The Google Cloud DevOps Exam
By: Cristian Trucco • 14/8/2024 • Exam • 7,280 Words (30 Pages) • 39 Views
Question 1 (Correct)
As an Event ticketing platform company using Google Cloud Services, following Site Reliability Engineering practices, you are the Incident Commander for a new, customer-impacting incident. Which two incident management roles should you assign immediately for an effective incident response?
Your answer is correct
Operations Lead.
External Customer Communications Lead (Customer Impact Assessor).
Lead Engineer
Communications Lead
General explanation
Operations Lead and Communications Lead.

Explanation: In Site Reliability Engineering (SRE), incident management is a crucial process that helps teams detect, respond to, and resolve incidents effectively. To ensure an effective incident response, the two key incident management roles that should be assigned immediately are:

1. Operations Lead: responsible for leading the technical response to the incident. This includes triaging the incident, identifying the root cause of the problem, implementing the necessary fixes, and restoring the service to normal as quickly as possible. The Operations Lead works closely with the other incident management roles to ensure the incident is resolved as efficiently as possible.

2. Communications Lead: responsible for managing communication during the incident. This includes communicating with customers, stakeholders, and the incident response team to provide updates on the status of the incident. The Communications Lead also ensures that the response team is aligned on the communication strategy and that all stakeholders are informed of the incident and its impact.

Assigning these two roles immediately ensures that the incident response team is well organized, efficient, and effective in resolving the incident.
Question 2 (Skipped)
As an online grocery delivery service company, you have a Node.js application running on Google Kubernetes Engine (GKE) that interacts with several dependent applications via HTTP requests. How can you proactively identify the dependent applications that may impact the performance of your application on GKE?
Use Cloud Debugger to analyze the logic execution of every application, instrumenting all applications.
Modify the Node.js application to record the duration of HTTP requests and responses to dependent applications, and use Cloud Logging to detect dependent applications with suboptimal performance.
Instrument all applications with Cloud Profiler.
Correct answer
Instrument all applications with Cloud Trace and review the HTTP requests between services.
General explanation
The answer is correct because Cloud Trace is a Google Cloud tool that helps identify performance issues by providing end-to-end visibility into an application's performance. By instrumenting all of the dependent applications with Cloud Trace, you can review the inter-service HTTP requests and see the time taken by each request. This lets you identify the dependent applications that are causing performance issues and take the necessary actions to optimize the application's performance.

In this scenario, as an online grocery delivery service company, it is crucial that the Node.js application running on GKE interacts efficiently with its dependent applications. By proactively identifying the dependencies that may impact performance, you can optimize the Node.js application, provide a seamless grocery delivery service, and improve the overall customer experience. Therefore, instrumenting all applications with Cloud Trace and reviewing inter-service HTTP requests is the right approach to proactively identify the dependent applications that may impact the performance of your application on GKE.
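In practice, Cloud Trace (for example via OpenTelemetry instrumentation) collects these per-request timings for you. The stdlib-only sketch below only illustrates the underlying idea of recording latency per dependency; the `fetch_inventory` function and the "inventory-service" name are hypothetical stand-ins for an outbound HTTP call, not part of any Google Cloud API.

```python
import time
from collections import defaultdict

# Records wall-clock duration of each outbound call, keyed by dependency
# name; a rough stand-in for the per-span timings Cloud Trace collects.
latencies_ms = defaultdict(list)

def timed_call(dependency, fn, *args, **kwargs):
    """Invoke fn and record its duration under the dependency's name."""
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    finally:
        elapsed = (time.perf_counter() - start) * 1000
        latencies_ms[dependency].append(elapsed)

# Hypothetical dependency call standing in for an HTTP request.
def fetch_inventory():
    time.sleep(0.02)  # simulate a ~20 ms backend response
    return {"status": "ok"}

result = timed_call("inventory-service", fetch_inventory)

# With several dependencies recorded, the slowest one is easy to spot.
slowest = max(latencies_ms, key=lambda d: max(latencies_ms[d]))
```

Cloud Trace does this automatically across service boundaries and correlates the spans into a single distributed trace, which is why it is preferred over hand-rolled timing code.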
Question 3 (Skipped)
As an online therapy platform company, how can we grant some members of our team access to export logs written to Cloud Logging in Google Cloud?
Configure Access Context Manager to permit only the designated members to export logs.
Create a custom IAM role that includes the logging.sinks.list and logging.sinks.get permissions.
Create an Organizational Policy in Cloud IAM that authorizes only the specified members to create log exports.
Correct answer
In Cloud IAM, grant the team members the logging.configWriter IAM role.
General explanation
The answer is correct because granting the logging.configWriter IAM role allows team members to configure logging sinks, create logs-based metrics, and export logs from Cloud Logging. The role includes permissions to create and update log sinks, which enable exporting logs to destinations such as BigQuery, Cloud Storage, or Pub/Sub.

logging.configWriter is a predefined role in Google Cloud IAM that provides the appropriate permissions to manage logging configurations. By assigning this role, team members can access the resources they need in Cloud Logging to export logs, without being granted unnecessary access to other resources or functionality. Granting the logging.configWriter role is therefore the most appropriate way to give specific team members export access to logs in Cloud Logging.
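As a sketch, the role grant described above can be applied at the project level with the gcloud CLI. The project ID and member address below are hypothetical placeholders; substitute your own values.

```shell
# Grant the predefined logging.configWriter role to a team member.
# "my-project" and the user email are placeholders, not real values.
gcloud projects add-iam-policy-binding my-project \
  --member="user:teammate@example.com" \
  --role="roles/logging.configWriter"
```

The same binding can also be applied at the folder or organization level if log export access is needed more broadly.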
Question 4 (Skipped)
As a Mobile payment solution company, you want to ensure fast response time for your payment processing. What is the Google Cloud-recommended way of implementing a Service Level Indicator (SLI) to measure the latency of payment processing requests and ensure that the acceptable response time is within 50 ms?
Correct answer
Count the number of home page requests that load in under 100 ms and divide it by the total number of home page requests.
Count the number of home page requests that load within 100 ms and divide it by the total number of requests to the web application.
Group the request latencies into specific ranges and then determine the median and 90th percentiles.
Group the request latencies into different ranges and bucketize them to calculate the percentile at 100 ms.
General explanation
The answer provided for this question is incorrect. The recommended way to implement a Service Level Indicator (SLI) for measuring the latency of payment processing requests is to define a threshold for acceptable response time and then track the percentage of requests that meet it. In this case the threshold is 50 ms, so the SLI should be defined as the percentage of payment processing requests completed within 50 ms: the number of payment processing requests completed within the threshold, divided by the total number of payment processing requests.

Counting the number of home page requests that load in under 100 ms and dividing it by the total number of home page requests is not appropriate, because it is not relevant to measuring the latency of payment processing requests.
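The good-events-over-total-events ratio described here can be written down directly. A minimal sketch, assuming per-request latencies are already available; the sample values are illustrative only:

```python
# Latency SLI: the fraction of payment requests completed within the
# 50 ms threshold (good events / total events).
THRESHOLD_MS = 50

def latency_sli(latencies_ms, threshold_ms=THRESHOLD_MS):
    """Return the fraction of requests at or under the latency threshold."""
    if not latencies_ms:
        return None  # no traffic: the SLI is undefined, not zero
    good = sum(1 for ms in latencies_ms if ms <= threshold_ms)
    return good / len(latencies_ms)

# Illustrative sample: 8 of these 10 requests meet the 50 ms target.
samples = [12, 34, 45, 49, 50, 51, 60, 20, 30, 40]
sli = latency_sli(samples)  # -> 0.8
```

An SLO then sets a target on this ratio over a window, for example "99% of payment requests complete within 50 ms over 30 days".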
...