First steps in Monitoring Micronaut apps with Prometheus and Grafana
In previous stories with Micronaut, we followed some guidelines on how to implement a microservice with JWT authorization, provide documentation using OpenAPI, access a database such as MySQL or Postgres from within a native image, and deploy it to Google Cloud Run.
Assuming our microservice is in production, the next steps in this series lead us to observability: keeping track of what is happening in our cloud environment, observing performance trends, being notified when something is clearly wrong, and searching for specific details in remote logs.
Let’s start here with the first story about Micronaut and observability, enabling monitoring and taking advantage of the pretty handy integrations with a quite popular tool combo: Prometheus and Grafana.
Monitoring
Monitoring requires explicitly registering certain metrics, which later feed the queries that give us the right insights to properly understand what happened in each particular scenario.
For monitoring, we need to identify which data to collect from our service and how to harvest those metrics to learn the story behind them.
Therefore, our goal is to select some metrics from a Micronaut service, such as CPU usage, memory consumption, thread behavior, database accesses, or successful HTTP requests, and visualize them in a colorful dashboard.
Metrics
Time to be hands-on, and as usual I suggest creating the project with Micronaut Launch to make it quick and easy. For this one, we only need to explicitly add one feature: Micrometer with the Prometheus registry.
With just one click, and without further configuration or code, our Micronaut app is generated and already provides out of the box a set of common metrics that can be listed and consumed through the /metrics endpoint.
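For instance, once the app is running, a quick call to the management endpoint already returns the list of registered meter names (output abbreviated here, and the exact names may vary with the Micronaut version):
curl http://localhost:8080/metrics
Result:
{
  "names": [
    "http.server.requests",
    "jvm.memory.used",
    "jvm.threads.states",
    "process.uptime",
    (...)
  ]
}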
However, since we are going to use Prometheus to consume them, let’s explicitly enable the /prometheus endpoint. The resulting configuration in our application.yml should look like this:
micronaut:
  application:
    name: metrics
  metrics:
    export:
      prometheus:
        enabled: true
        descriptions: true
        step: PT1M
    enabled: true
endpoints:
  prometheus:
    sensitive: false
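If you skipped Launch and are adding this to an existing build, the feature essentially boils down to a couple of dependencies. A minimal sketch for a Gradle Kotlin DSL build, assuming the Micronaut Gradle plugin and its BOM manage the versions:
dependencies {
    // Micrometer integration plus the Prometheus meter registry
    implementation("io.micronaut.micrometer:micronaut-micrometer-registry-prometheus")
    // Management module that exposes the /metrics and /prometheus endpoints
    implementation("io.micronaut:micronaut-management")
}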
Moreover, to have more fun and enrich this example, we are going to expose a simple endpoint (/ping/{name}) that records a custom metric. To do so, we take advantage of the MeterRegistry already provided by the feature and use the Micrometer API to aggregate that data (e.g. counting the calls):
import io.micrometer.core.instrument.MeterRegistry
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get
import jakarta.inject.Inject
import javax.validation.constraints.NotBlank

@Controller("/ping")
class PingController {

    @Inject lateinit var meterRegistry: MeterRegistry

    @Get("/{name}")
    fun ping(@NotBlank name: String): String {
        // Count every call, tagging the counter with the name path parameter
        meterRegistry.counter("ping", "param", name).increment()
        return "pong $name"
    }
}
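The same registry can record other meter types as well. As a purely illustrative sketch (the /timed/{name} route and the ping.timer meter are not part of the example project), a variant of the endpoint could time its own execution with a Micrometer Timer:
@Get("/timed/{name}")
fun timedPing(@NotBlank name: String): String {
    // Record how long this block takes under the (hypothetical) ping.timer meter
    return meterRegistry.timer("ping.timer", "param", name)
        .recordCallable { "pong $name" } ?: ""
}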
Finally, let’s write some tests to verify the metrics and our endpoint together. One particular case we can add on top is that the custom ping metric does not appear until we perform a call, so we can explicitly check that it did the trick.
fun verifyPrometheusEndpointTest() {
    // when: the Prometheus endpoint is scraped before any call to /ping
    val request: HttpRequest<Any> = HttpRequest.GET("/prometheus")
    val rsp = client.toBlocking().exchange(
        request, Argument.of(String::class.java)
    )
    val body: String = rsp.body() ?: ""
    // then: the default HTTP metrics are present, but the custom ping counter is not yet
    assertEquals(HttpStatus.OK, rsp.status)
    assertTrue(body.isNotBlank())
    assertTrue(body.contains("http"))
    assertFalse(body.contains("ping"))
}

fun verifyPingEndpointTest() {
    // when: the custom endpoint is called once
    val request: HttpRequest<Any> = HttpRequest.GET("/ping/test")
    val rsp = client.toBlocking().exchange(
        request, Argument.of(String::class.java)
    )
    val body = rsp.body()
    // then: the response carries the expected pong message
    assertEquals(HttpStatus.OK, rsp.status)
    assertEquals("pong test", body)
}

fun verifyPrometheusCustomMetricTest() {
    // when: the Prometheus endpoint is scraped again, after the call to /ping
    val request: HttpRequest<Any> = HttpRequest.GET("/prometheus")
    val rsp = client.toBlocking().exchange(
        request, Argument.of(String::class.java)
    )
    val body: String = rsp.body() ?: ""
    // then: the custom ping counter now shows up next to the default metrics
    assertTrue(body.isNotBlank())
    assertTrue(body.contains("http"))
    assertTrue(body.contains("ping"))
}
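These snippets omit the surrounding test class on purpose; note as well that the three methods only make sense when they run in this exact order, since the first one asserts that the ping counter is still absent and the last one that it is present. As a hedged sketch (the repository linked below may wire this differently), one possible skeleton using JUnit 5 ordering and the Micronaut test extension could be:
import io.micronaut.http.client.HttpClient
import io.micronaut.http.client.annotation.Client
import io.micronaut.test.extensions.junit5.annotation.MicronautTest
import jakarta.inject.Inject
import org.junit.jupiter.api.MethodOrderer
import org.junit.jupiter.api.Order
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.TestMethodOrder

@MicronautTest
@TestMethodOrder(MethodOrderer.OrderAnnotation::class)
class PingControllerTest {

    // HTTP client pointing at the embedded server started by @MicronautTest
    @Inject
    @field:Client("/")
    lateinit var client: HttpClient

    @Test @Order(1)
    fun verifyPrometheusEndpointTest() { /* ... as above ... */ }

    @Test @Order(2)
    fun verifyPingEndpointTest() { /* ... as above ... */ }

    @Test @Order(3)
    fun verifyPrometheusCustomMetricTest() { /* ... as above ... */ }
}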
Last but not least, we just need to execute the tests (./gradlew test) and be sure our endpoints are working as we expect:
mnk.metrics.controller.PingControllerTest
✔ Metrics Endpoint Successful Test
✔ Prometheus Endpoint Successful Test
✔ Ping Endpoint Successful Test
✔ Custom Metrics Successful Test
✔ Prometheus Custom Metrics Successful Test
To check the code directly, which contains more details on the testing part, please be my guest: https://github.com/rmondejar/mnk-metrics-example
Visualize
At this point, we have already finished our work on the Micronaut side, which is pretty similar to other JVM frameworks, to be honest, and we could just deploy the service to an environment where all the tools are ready to go.
But if you want to continue and get your hands dirty, let’s briefly talk about the whole system and the tools required to test it locally.
- Expose metrics (Micrometer): we should provide the data in the required format to be pulled, just as we did in the previous section.
- Pull metrics (Prometheus): designed to operate on a pull model, periodically scraping metrics from application instances, based on service discovery, and working as a data source.
- Visualize metrics (Grafana): collecting data from different data sources, allowing us to create insightful dashboards that are updated periodically, and can trigger specific alerts as well.
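The next sections run both tools directly on the host with Homebrew. As an alternative sketch, not part of the original example, the same pair can be brought up with Docker Compose; just keep in mind that a containerized Prometheus cannot scrape 127.0.0.1:8080 and would need to target the host instead (for instance host.docker.internal:8080 on Docker Desktop):
version: "3.8"
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      # mount a scrape configuration like the one shown in the Prometheus section below
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"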
Microservice
Now we are ready to run everything and visualize our data. If the microservice is not alive, let's run it again and perform some checks:
./gradlew run
curl http://localhost:8080/ping/hello
curl http://localhost:8080/ping/world
curl http://localhost:8080/metrics/ping
Result:
{ "name": "ping",
"measurements": [{
"statistic": "COUNT",
"value": 2.0
}],
"availableTags": [{
"tag": "param",
"values": [
"world",
"hello" ]
}]
}curl http://localhost:8080/prometheusResult:
(...)
# HELP ping_total
# TYPE ping_total counter
ping_total{param="hello"} 1.0
ping_total{param="world"} 1.0
(...)
In order for the custom metric (ping) to appear here, the line of code that contains the meterRegistry service call must be executed at least once.
Prometheus
Second, if we don’t have a remote instance yet, we should install Prometheus, for example locally, and add our service as a new job under scrape_configs in the configuration file:
brew install prometheus
vi /usr/local/etc/prometheus.yml
  - job_name: "micronaut"
    metrics_path: "/prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ["127.0.0.1:8080"]
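Before starting the service, the edited file can be validated with promtool, if it is available in your installation (assuming the default Homebrew config path used above):
promtool check config /usr/local/etc/prometheus.yml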
Next, we can start our Prometheus service and check the targets:
brew services start prometheus
open http://localhost:9090/targets
And our microservice endpoint should be shown there as up and running.
Finally, we should check some metrics in the Graph tab, searching for names like jvm_threads_states_threads or our custom ping_total. Hopefully, everything is correct and we are able to obtain the current values.
Although we are able to visualize our data already, our goal is to use powerful dashboards with tons of possibilities.
Grafana
Finally, we reach the last step, which is going to be more straightforward than you might imagine the first time. We need to install and run Grafana locally:
brew install grafana
brew services start grafana
open http://localhost:3000
In order to sign in, just use admin for both the user and password fields. Once you are in, the first step is to integrate both tools by creating a new data source that connects to our locally running Prometheus instance.
open http://localhost:3000/datasources/new
On that form, just be sure that the URL is pointing to http://localhost:9090, test & save, and go to the main page to create a brand new dashboard.
open http://localhost:3000/dashboard/new
On our empty board, we should add a few panels to test it out, setting the title, the visualization type, and the query in the metrics browser for each of them:
- Ping Calls (Heatmap): sum(increase(ping_total[1m]))
- HTTP Server Success (Graph): sum(increase(http_server_requests_seconds_count{status=~"2..", uri!="/prometheus", uri!="/metrics"}[1m]))
- HTTP Max Duration (Stat): max(http_server_requests_seconds_max{status!~"5.."})
You can play a little with the queries, adding more series to the graph, for example visualizing separately what each endpoint is producing, or creating a new panel to show the HTTP errors explicitly.
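As a hint for that last suggestion, a hypothetical error panel could use a query along these lines, splitting the series per endpoint:
sum by (uri) (increase(http_server_requests_seconds_count{status=~"5..", uri!="/prometheus", uri!="/metrics"}[1m]))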
At this point, you are already monitoring your service, and you should replicate this setup in the cloud so you can rest assured that everything is in place.
Summary
As the first step in our observability journey with Micronaut, with just a few lines of configuration and code, we were able to set up Micrometer and Prometheus.
If you prefer to go directly to the code and try to replicate this locally, which I strongly recommend, just check the project repository: https://github.com/rmondejar/mnk-metrics-example
What now? Well, our bucket list for the next steps should already include:
- Secure our endpoints with Micronaut Security
- Pick more metrics and add more panels/rows to our dashboard
- Learn the Grafana query language and how to use it properly
After that, aside from monitoring, we should move forward into the rest of the observability topic: setting alerts to be automatically notified when something important to us happens, populating remote logging to collect more information about specific events, or enabling distributed tracing to obtain the big picture of our microservice architecture.
A lot of fun and help there for sure.
See you next time!