AWS ECS + WSO2 APIM with ELK

Lakmini Wathsala
6 min read · Dec 20, 2021

👐 Deploying WSO2 APIM in AWS ECS cluster

WSO2 APIM is a fully open-source API management solution that can be deployed on-premises, in the cloud, or as a hybrid solution. Here we are focusing on an APIM deployment in an AWS ECS Fargate cluster running from Docker images.

❓What is ELK??

“ELK” stands for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a distributed, open-source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. Logstash is also a free and open server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana is a free and open user interface that lets users visualize data with charts and graphs in Elasticsearch.

Additionally, there are lightweight, single-purpose data shippers that feed into the ELK Stack, called Beats.

🚦 ELK with WSO2 APIM

There can be a requirement to analyze WSO2 logs with a third-party tool or to keep these logs in a common place. In an AWS-based deployment this can be achieved in different ways, e.g. ELK + Beats, AWS FireLens, AWS CloudWatch Logs, etc. In this blog post, I will be going through the ELK + Beats option.

At a high level, the API Manager publishes logs to a predefined directory, and that directory is mounted on the instance where Filebeat runs. Filebeat then streams the new log lines to Logstash. In Logstash we can define filters, and based on those filters it decides which logs should be fed into Elasticsearch. When passing logs to Elasticsearch, Logstash can also extract fields from the log lines, so those fields can be used to query the data accordingly. Kibana fetches data from Elasticsearch and visualizes it according to the user's needs; any custom dashboard can be designed using the extracted fields.

🌱 Implementation

✋ Install WSO2 APIM and configure it in the cluster

# login to wso2 docker registry
docker login docker.wso2.com
#pull updated wso2am image (X=Latest update level)
docker pull docker.wso2.com/wso2am:4.0.0.X
#tag docker image with ECR hostname
docker tag docker.wso2.com/wso2am:4.0.0.X <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/wso2am:4.0.0.X
#login to aws ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com
#push image to ECR
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/wso2am:4.0.0.X
  • That image will be used (as the value for Image*) in the AWS ECS Fargate cluster while creating the task definition (in ‘Add Container’).

We also need to add a mount point in the same container for the ‘/repository/logs’ directory (under the APIM home directory), so that the log files end up on the shared volume.

  • Then the created volume can be selected from the drop-down menu of ‘Mount points’ and map with the container path while adding the container.

The container summary in the console should then reflect the image and the mount point configured above.
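For orientation, a rough JSON sketch of the corresponding parts of the task definition is shown below. The file system ID, CPU/memory sizing, and the container log path are placeholders and assumptions (the path shown assumes the default wso2am-4.0.0 image layout); adjust them to your setup.

{
  "family": "wso2am",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "2048",
  "memory": "4096",
  "volumes": [
    {
      "name": "wso2-logs",
      "efsVolumeConfiguration": {
        "fileSystemId": "<efs-id>",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "wso2am",
      "image": "<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/wso2am:4.0.0.X",
      "portMappings": [{ "containerPort": 9443, "protocol": "tcp" }],
      "mountPoints": [
        {
          "sourceVolume": "wso2-logs",
          "containerPath": "/home/wso2carbon/wso2am-4.0.0/repository/logs"
        }
      ]
    }
  ]
}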

Please refer to the blog post — https://mcvidanagama.medium.com/deploy-wso2-api-manager-in-a-aws-ecs-fargate-cluster-b97c7275f861 on deploying APIM in the Fargate cluster.

✋ Create an EC2 instance for EFS

Please refer to https://docs.aws.amazon.com/efs/latest/ug/gs-step-one-create-ec2-resources.html for creating an EC2 instance and attaching the previously created EFS. Then grant ownership of the EFS mount directory to the ‘wso2carbon’ user of the ‘wso2’ group on the instance.

# create the wso2 group and the wso2carbon user used by the APIM container
sudo groupadd wso2
sudo useradd -g wso2 wso2carbon
# hand ownership of the EFS mount to wso2carbon so the APIM container can write its logs
sudo chown -R wso2carbon:wso2 /mnt/efs/fs1/

✋ Create an EC2 instance for ELK

Create an EC2 instance for ELK servers referring to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html

Please refer to the below sample commands to download the servers on a Linux-based EC2 instance.

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.0-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.14.0-linux-x86_64.tar.gz

Filebeat, Logstash, Elasticsearch, and Kibana can all be downloaded from the Elastic website in the same way.
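A sketch of the download commands for the remaining components, assuming the same 7.14.0 version as Elasticsearch above (verify the exact artifact names against the Elastic downloads page):

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.14.0-linux-x86_64.tar.gz
tar -zxvf logstash-7.14.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.14.0-linux-x86_64.tar.gz
tar -zxvf kibana-7.14.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.14.0-linux-x86_64.tar.gz
tar -zxvf filebeat-7.14.0-linux-x86_64.tar.gz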

Configure and start the servers

🎈WSO2 APIM

  • Install the EFS mount helper on the EC2 instance for the ELK servers.
yum install -y amazon-efs-utils
  • Create a directory called ‘wso2_logs’ manually on that instance and mount the EFS to it.
$ sudo mount -t efs -o tls <efs-id>.efs.us-east-1.amazonaws.com:/ /home/ec2-user/wso2_logs
  • Restart the APIM server (task) and make sure that log files are being written to the /home/ec2-user/wso2_logs location on the EC2 instance of the ELK servers.
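To quickly confirm that the mount works and logs are flowing, you can tail the log file (the file name assumes the default wso2carbon.log):

tail -f /home/ec2-user/wso2_logs/wso2carbon.log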

🎈Filebeat
All the required configuration is done in the ‘filebeat.yml’ file. Let’s configure the log input type and point the path to the ‘wso2carbon.log’ file as a Filebeat input. If there is a requirement to feed all the log files, then ‘*’ can be used instead of specifying the log file name. Change the path according to your file system.

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/ec2-user/wso2_logs/wso2carbon.log
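Filebeat also needs an output section pointing at Logstash. A minimal sketch, assuming Logstash runs on the same instance and listens on the default Beats port 5044 (as configured in the next step):

output.logstash:
  hosts: ["localhost:5044"]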

Then start the component by executing the ‘filebeat‘ script.

./filebeat

🎈Logstash
A Grok filter needs to be configured and saved as ‘logstash-beat.conf’. The sample filter below passes all the logs through; however, you can write your own Grok filter according to your requirements.

input {
  beats {
    type => "beats"
    host => "<?>"
    port => 5044
  }
}

filter {
  grok {
    match => [ "message", "TID: \[%{INT:TID}\] \[\] \[%{TIMESTAMP_ISO8601:timestamp}\]\s+%{WORD:loglevel}\s+{%{JAVACLASS:java_class}}%{GREEDYDATA:FlowMessage}" ]
    tag_on_failure => ["failed-to-parse"]
  }
}

output {
  elasticsearch {
    hosts => ["<host>:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
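As a quick sanity check, the pattern above is intended to match carbon log lines of roughly the following shape (the exact layout depends on your log4j2 configuration), extracting TID, timestamp, loglevel, java_class, and FlowMessage:

TID: [-1234] [] [2021-12-20 10:15:30,123]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - 'admin@carbon.super' logged in at [2021-12-20 10:15:30,123+0000]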

Start the component.

./logstash -f ../config/logstash-beat.conf

🎈ElasticSearch
Just start the component with default configurations. We need to start the ElasticSearch server before starting the Kibana server.

./elasticsearch

🎈Kibana
Change "server.host:" in kibana.yml to the private IP address of the EC2 instance. Finally, start the Kibana server.

./kibana
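For reference, a minimal kibana.yml along these lines should be enough, assuming Elasticsearch runs on the same instance with its default port:

server.host: "<ec2-private-ip>"
elasticsearch.hosts: ["http://localhost:9200"]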

🌾 Monitoring the logs

You can visit the Kibana server at http(s)://your_kibana_host:5601. After some data (logs and statistics) has been pushed to Elasticsearch, you will be able to view it by creating an index pattern. First, create the index pattern "logstash-*"; it will show all the available fields, including the custom fields, on the Kibana dashboard.

Then you can create the required dashboard by choosing the needed fields.
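For example, in Discover (with the logstash-* index pattern selected), a KQL query like the one below narrows the view to error logs, using the loglevel field extracted by the Grok filter:

loglevel : "ERROR"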

And we are DONE!!! 🎉 🎊 🎉 🌸 🌸Happy Stacking!!!🌸 🌸
