Troubleshooting the ML Commons Framework


You have finished implementing OpenSearch models, but something is not working correctly. To resolve the issue, you need to learn how to troubleshoot the engine that powers the models feature.

Published Dec 20, 2023
In this series, you have been exploring the models feature of OpenSearch. By now, we hope you are aware of its capabilities and enthusiastic about the incredible possibilities it offers for building applications. However, it is unrealistic to assume that everything will be perfect. You need to know how to troubleshoot problems that may arise when things don't go as planned.
Here, you will learn a few things about the OpenSearch ML Commons Framework that will help you feel comfortable enough to troubleshoot issues on your own. We hope you won't have to—but as Thor said in the Thor Ragnarok movie: "A wise king never seeks out war. But he must always be ready for it."

If there is one recurring pattern in the adoption of any modern software technology, it is that most users struggle with problems related to the default values of important settings. Not knowing which settings exist, what their default values are, and what impact they have when deploying applications is a source of many problems. The ML Commons Framework is no different.
You should spend some time learning which settings are available in the ML Commons Framework and what their default values are. To list all of this plugin's settings along with their default values, you can use the following command:
GET _cluster/settings?include_defaults=true&filter_path=defaults.plugins.ml_commons
You should see the following output:
{
"defaults": {
"plugins": {
"ml_commons": {
"monitoring_request_count": "100",
"allow_custom_deployment_plan": "false",
"sync_up_job_interval_in_seconds": "10",
"ml_task_timeout_in_seconds": "600",
"task_dispatcher": {
"eligible_node_role": {
"local_model": [
"data",
"ml"
],
"remote_model": [
"data",
"ml"
]
}
},
"trusted_url_regex": "^(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]",
"rag_pipeline_feature_enabled": "false",
"task_dispatch_policy": "round_robin",
"max_ml_task_per_node": "10",
"exclude_nodes": {
"_name": ""
},
"model_access_control_enabled": "false",
"native_memory_threshold": "90",
"model_auto_redeploy": {
"lifetime_retry_times": "3",
"enable": "false"
},
"jvm_heap_memory_threshold": "85",
"memory_feature_enabled": "false",
"only_run_on_ml_node": "true",
"max_register_model_tasks_per_node": "10",
"allow_registering_model_via_local_file": "false",
"update_connector": {
"enabled": "false"
},
"max_model_on_node": "10",
"trusted_connector_endpoints_regex": [
"""^https://runtime\.sagemaker\..*[a-z0-9-]\.amazonaws\.com/.*$""",
"""^https://api\.openai\.com/.*$""",
"""^https://api\.cohere\.ai/.*$""",
"""^https://bedrock-runtime\..*[a-z0-9-]\.amazonaws\.com/.*$"""
],
"remote_inference": {
"enabled": "true"
},
"connector_access_control_enabled": "false",
"enable_inhouse_python_model": "false",
"max_deploy_model_tasks_per_node": "10",
"allow_registering_model_via_url": "false"
}
}
}
}
As you may have noticed, there is a fair number of settings used by the ML Commons Framework. Some of them are self-explanatory, so I won't go over all of them in detail. Instead, below I summarize the top five settings you may want to know more about.
  1. plugins.ml_commons.model_access_control_enabled: models deployed in OpenSearch can be protected with granular roles tied to them. This setting enables that behavior, as opposed to allowing anyone to use any model at any time. If you are working with a cluster where this setting is enabled, check whether someone associated a role with the model, as that may explain why you get access errors every time you try to deploy it.
  2. plugins.ml_commons.native_memory_threshold: this setting defines an upper bound on RAM (also known as native memory) utilization before tasks stop being accepted. It defaults to 90, which means that if RAM utilization goes above 90%, a circuit breaker stops tasks from being executed. For a really busy OpenSearch cluster that also has to serve search requests, this is something you want to watch out for.
  3. plugins.ml_commons.jvm_heap_memory_threshold: this setting defines an upper bound on JVM heap utilization before tasks stop being accepted. It defaults to 85, which means that if JVM heap utilization goes above 85%, a circuit breaker stops tasks from being executed. Note that the heap may cross this threshold more frequently during peak times: once garbage collection finishes, heap usage shrinks, but it can fill up again quickly.
  4. plugins.ml_commons.model_auto_redeploy.enable: as you may have learned by now, every time you deploy a model, the work is carried out by a task in the OpenSearch cluster. The nodes responsible for executing these tasks can fail at any time, and by default there is no retry. Setting this to true tells OpenSearch to attempt a redeploy whenever a model is found undeployed or partially deployed. This may explain why, even after bouncing your cluster, the model still doesn't work. When this setting is true, you can optionally use plugins.ml_commons.model_auto_redeploy.lifetime_retry_times to specify how many redeploy attempts should be made.
  5. plugins.ml_commons.trusted_connector_endpoints_regex: this setting controls which endpoints are allowed to handle inference requests. By default, only a small set of endpoints is on the list. If you ever need to use a custom model, you will have to add your endpoint to this list. Failing to do so may be the reason why your models show as deployed but always fail to handle inference requests: the endpoint simply isn't allowlisted.
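Most of these are dynamic cluster settings, so you can change them at runtime through the cluster settings API. Here is a sketch of enabling auto redeploy and trusting an additional endpoint. The setting names come from the defaults listed above, but the endpoint my-custom-endpoint.example.com is a placeholder for your own, and supplying an explicit list replaces the default one, so keep any stock endpoints you still need:

```
PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.model_auto_redeploy.enable": true,
    "plugins.ml_commons.model_auto_redeploy.lifetime_retry_times": 5,
    "plugins.ml_commons.trusted_connector_endpoints_regex": [
      "^https://api\\.openai\\.com/.*$",
      "^https://my-custom-endpoint\\.example\\.com/.*$"
    ]
  }
}
```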
While the settings discussed above relate to plugin behavior and to the problems that arise from not knowing them, the plugins.ml_commons.max_ml_task_per_node setting is a bit trickier, as it has to do with resource utilization. Problems related to resource utilization only surface under certain load conditions and are harder to identify and troubleshoot. In a nutshell, this setting controls how many tasks ML nodes are allowed to execute. For small workloads without many concurrent tasks, this won't be a problem. However, think about scenarios where you have few ML nodes and they are responsible for handling a considerable number of tasks.
They may hit the limit imposed by the default value, which is 10. If you need to scale up to more tasks per node, you can raise this setting. However, there is another catch you must be aware of. Tasks are executed as threads, and these threads are taken from a pool. Even if you increase the number of tasks an ML node can handle, you must ensure the thread pool for the specific task type is large enough to afford the amount of concurrency needed. To inspect the thread pools used by the ML Commons plugin, you can use the following command:
GET _cluster/settings?include_defaults=true&filter_path=defaults.thread_pool.ml_commons
You should see the following output:
{
"defaults": {
"thread_pool": {
"ml_commons": {
"opensearch_ml_deploy": {
"queue_size": "10",
"size": "9"
},
"opensearch_ml_execute": {
"queue_size": "10",
"size": "9"
},
"opensearch_ml_register": {
"queue_size": "10",
"size": "9"
},
"opensearch_ml_train": {
"queue_size": "10",
"size": "9"
},
"opensearch_ml_predict": {
"queue_size": "10000",
"size": "20"
},
"opensearch_ml_general": {
"queue_size": "100",
"size": "9"
}
}
}
}
}
Make sure to adjust the size as needed.
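As a concrete example, plugins.ml_commons.max_ml_task_per_node is a dynamic setting you can raise at runtime, while the thread pool sizes shown above are static node-level settings that belong in opensearch.yml and require a restart. The value of 20 below is an arbitrary illustration, not a recommendation:

```
PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.max_ml_task_per_node": 20
  }
}
```

If you also need to grow a pool such as opensearch_ml_predict, the standard thread_pool.<name>.size and thread_pool.<name>.queue_size convention should apply, but verify this against the documentation for your OpenSearch version before relying on it.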

In some cases, users may complain about the application's rather slow performance. Initial troubleshooting may point to the calls made to models as one possible cause of this sluggishness. A good way to investigate further is the Profile API provided by the ML Commons Framework.
To use the Profile API to investigate the performance of your models, use the following command:
GET /_plugins/_ml/profile/models
You should see an output similar to this:
{
"nodes": {
"QIpgbLWFSwyTFtWz5j-OvA": {
"models": {
"s_kvA4wBfndRacpb8I1Y": {
"model_state": "DEPLOYED",
"predictor": "org.opensearch.ml.engine.algorithms.remote.RemoteModel@687c2ebe",
"target_worker_nodes": [
"QIpgbLWFSwyTFtWz5j-OvA"
],
"worker_nodes": [
"QIpgbLWFSwyTFtWz5j-OvA"
],
"model_inference_stats": {
"count": 8,
"max": 2322.292209,
"min": 469.437416,
"average": 1250.456260875,
"p50": 1197.6908130000002,
"p90": 1667.658159,
"p99": 2256.828804
},
"predict_request_stats": {
"count": 8,
"max": 2324.38096,
"min": 471.412834,
"average": 1252.4851045,
"p50": 1199.588,
"p90": 1669.7088755,
"p99": 2258.9137515499997
}
}
}
}
}
}
Note the hierarchical structure of this output. The analysis is broken down on a per-node basis, then on a per-model basis. For each deployed model, there are two groups: model_inference_stats and predict_request_stats. The former covers the actual inferences executed by the model, whereas the latter covers the predict requests made to the model. Your troubleshooting exercise should weigh the computed metrics for each group against the number of requests shown in the count field. This should give you a good idea of whether the models are indeed the culprit.
You may note a discrepancy between the value reported in the count field and the actual number of requests executed. This happens because the Profile API monitors only the last 100 requests by default. To change the number of requests monitored, update the following cluster setting:
PUT _cluster/settings
{
  "persistent" : {
    "plugins.ml_commons.monitoring_request_count" : 1000000
  }
}
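When you already suspect a specific model, you can narrow the Profile API down to it by appending the model ID to the path. The ID below is the one from the sample output above, so substitute your own; check the Profile API documentation for your version to confirm this path applies:

```
GET /_plugins/_ml/profile/models/s_kvA4wBfndRacpb8I1Y
```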

Searching data with OpenSearch is more complex than querying a relational database. The reason lies in OpenSearch's shared-nothing architecture, which distributes documents across shards. Consequently, when a search request is initiated, its execution is more intricate, since it isn't known upfront which documents will match the query or where they are stored. This is why OpenSearch applies the query-then-fetch approach. In a nutshell, here is how it works.
In the initial query phase, the query is sent to each shard in the index. Each shard performs the search and generates a queue of matching documents. This helps identify the documents that meet the search criteria. However, we still need to retrieve the actual documents themselves in the fetch phase. In this phase, the coordinating node decides which documents to fetch. These documents may come from one or multiple shards involved in the original search. The coordinating node sends a request to the relevant shard copy, which then loads the document bodies into the _source field. Once the coordinating node has gathered all the results, it combines them into a unified response to send back to the client.
Executing search requests in OpenSearch can be complicated due to its distributed nature. Various parts of the system can fail or become slow, resulting in poor performance. This means you need something in your pocket for when performance issues occur, and if you integrate models with search requests, they surely can. For instance, in part three of this series, you saw that you can leverage models in conjunction with neural queries to create amazing search experiences out of your data. If you ever find yourself suspecting that models may be slowing down your searches, you can leverage the Profile API to troubleshoot your search requests.
Getting started with the Profile API is quite simple: just add "profile": true to your search request body. For example:
GET /nlp_pqa_2/_search
{
  "profile": true,
  "_source": [ "question" ],
  "size": 30,
  "query": {
    "neural": {
      "question_vector": {
        "query_text": "What is the meaning of life?",
        "model_id": "-OnayIsBvAWGexYmHu8G",
        "k": 30
      }
    }
  }
}
You should receive an output similar to this:
{
"took": 774,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 4,
"relation": "eq"
},
"max_score": 1,
"hits": [
{
"_index": "nlp_pqa_2",
"_id": "1",
"_score": 1,
"_source": {
"question": "What is the meaning of life?"
}
},
{
"_index": "nlp_pqa_2",
"_id": "3",
"_score": 0.3856697,
"_source": {
"question": "How many legs does an Elephant have?"
}
},
{
"_index": "nlp_pqa_2",
"_id": "4",
"_score": 0.38426778,
"_source": {
"question": "How many legs does a Giraffe have?"
}
},
{
"_index": "nlp_pqa_2",
"_id": "2",
"_score": 0.34972358,
"_source": {
"question": "Does this work with xbox?"
}
}
]
},
"profile": {
"shards": [
{
"id": "[3mWnAgBCTvO_NM_zp2p_pg][nlp_pqa_2][2]",
"inbound_network_time_in_millis": 0,
"outbound_network_time_in_millis": 0,
"searches": [
{
"query": [
{
"type": "KNNQuery",
"description": "",
"time_in_nanos": 10847,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 0,
"match": 0,
"next_doc_count": 0,
"score_count": 0,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 0,
"advance_count": 0,
"score": 0,
"build_scorer_count": 0,
"create_weight": 10847,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 0
}
}
],
"rewrite_time": 6965,
"collector": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 6605
}
]
}
],
"aggregations": []
},
{
"id": "[3mWnAgBCTvO_NM_zp2p_pg][nlp_pqa_2][3]",
"inbound_network_time_in_millis": 0,
"outbound_network_time_in_millis": 0,
"searches": [
{
"query": [
{
"type": "KNNQuery",
"description": "",
"time_in_nanos": 79843642,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 615,
"match": 0,
"next_doc_count": 1,
"score_count": 1,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 1822,
"advance_count": 1,
"score": 4185,
"build_scorer_count": 2,
"create_weight": 10888,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 79826132
}
}
],
"rewrite_time": 2486,
"collector": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 40952
}
]
}
],
"aggregations": []
},
{
"id": "[3mWnAgBCTvO_NM_zp2p_pg][nlp_pqa_2][4]",
"inbound_network_time_in_millis": 0,
"outbound_network_time_in_millis": 0,
"searches": [
{
"query": [
{
"type": "KNNQuery",
"description": "",
"time_in_nanos": 81504014,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 1321,
"match": 0,
"next_doc_count": 1,
"score_count": 1,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 435,
"advance_count": 1,
"score": 16599,
"build_scorer_count": 2,
"create_weight": 76898,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 81408761
}
}
],
"rewrite_time": 3020,
"collector": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 45490
}
]
}
],
"aggregations": []
},
{
"id": "[BP2uaV4iScmS_zRntM65AQ][nlp_pqa_2][0]",
"inbound_network_time_in_millis": 1,
"outbound_network_time_in_millis": 2,
"searches": [
{
"query": [
{
"type": "KNNQuery",
"description": "",
"time_in_nanos": 102327857,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 509,
"match": 0,
"next_doc_count": 1,
"score_count": 1,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 903,
"advance_count": 1,
"score": 2298,
"build_scorer_count": 2,
"create_weight": 57221,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 102266926
}
}
],
"rewrite_time": 8032,
"collector": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 26020
}
]
}
],
"aggregations": []
},
{
"id": "[BP2uaV4iScmS_zRntM65AQ][nlp_pqa_2][1]",
"inbound_network_time_in_millis": 1,
"outbound_network_time_in_millis": 5,
"searches": [
{
"query": [
{
"type": "KNNQuery",
"description": "",
"time_in_nanos": 99278876,
"breakdown": {
"set_min_competitive_score_count": 0,
"match_count": 0,
"shallow_advance_count": 0,
"set_min_competitive_score": 0,
"next_doc": 1305,
"match": 0,
"next_doc_count": 1,
"score_count": 1,
"compute_max_score_count": 0,
"compute_max_score": 0,
"advance": 1920,
"advance_count": 1,
"score": 17296,
"build_scorer_count": 2,
"create_weight": 57394,
"shallow_advance": 0,
"create_weight_count": 1,
"build_scorer": 99200961
}
}
],
"rewrite_time": 7244,
"collector": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time_in_nanos": 53085
}
]
}
],
"aggregations": []
}
]
}
}
Note how the response includes an additional field called profile containing interesting data about the execution of individual components of the search request. Analyzing this data allows you to debug slower requests and understand how to improve their performance. The trick here is to cross-reference the time taken by the models with the time spent in the actual search execution. The time taken by the models can be measured with the profiling approach from the previous section.

There is one troubleshooting technique that should be your first instinct when dealing with problems reported by developers using OpenSearch: always check the HTTP code returned. As you may know, everything OpenSearch does is provided to developers via REST APIs. For this reason, there will always be an HTTP code for you to check. This is important because, depending on the HTTP code returned, you may save hours of troubleshooting just by figuring out that an error may not actually be an error.
A good example is a request that looks like it failed but was in reality sent by a user who lacks permissions for it. If you receive a 401 or 403 HTTP code, the request was successful up to the point where the user's credentials were verified and their permissions were checked. This is actually good news, since you won't have to investigate the supposed error. You just need to determine whether access to the resource should or should not be granted to the user.
For instance, consider the support provided by the ML Commons Framework for model access control. This may explain why a user's attempts to register or deploy a model keep failing. The model may be tied to a model group whose access mode is restricted, or one that is intentionally not visible to you. This can happen because a model group can be created with restricted access, using organizational conventions called backend_roles that prevent certain users from accessing it. To illustrate this, see the group model_group_test below.
POST /_plugins/_ml/model_groups/_register
{
  "name": "model_group_test",
  "description": "This is an example description",
  "access_mode": "restricted",
  "backend_roles" : ["data_scientists", "administrators"]
}
Here, any developer who tries deploying a model belonging to the group model_group_test without being part of the data_scientists or administrators backend roles won't be able to complete the deployment request successfully.
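Before chasing role mappings, it is worth confirming that model access control is actually enabled on the cluster, since, as shown earlier, it defaults to false. The same cluster settings query used at the beginning of this post can be filtered down to this single setting:

```
GET _cluster/settings?include_defaults=true&filter_path=*.plugins.ml_commons.model_access_control_enabled
```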

As stated at the beginning of this blog post, it is unrealistic to assume that everything will be perfect. If you have gone through all the sections and still have no clue about your issue with the ML Commons Framework, you can pursue one last option:
🐞 Debugging the source code for the project.
Now, I understand if you feel uncomfortable with this if you are not a software engineer. But hopefully the instructions below will guide you in the right direction so you can accomplish this fearsome task, and I believe it will pay off in the end. Debugging the source code of the ML Commons Framework is the best way to understand, at an implementation level, the behavior that may be haunting your applications.
Before moving further, make sure you take care of the following dependencies:
Once the dependencies are taken care of, you can fork the project on GitHub. Go to the project URL and fork it, then retrieve the URL of your fork so you can clone it locally.
With the URL of your fork, you can clone the project locally. There are many ways to do this, including using the git command in a terminal. However, for this debugging exercise you will need an IDE to watch the execution of the code, so it is better to start the cloning process from your IDE. I will show you examples using both IntelliJ IDEA and Visual Studio Code.

With IntelliJ, once you clone your fork, the IDE will automatically trigger the Gradle build, which gets the project ready to use. This process may take some time depending on your computer's resources, so give it time to finish. Then you can configure remote debugging.
Create a new Run/Debug configuration of the type Remote JVM Debug and give it a meaningful name. Set the debugger mode to Listen to remote JVM and select the Auto restart checkbox. Apply the configuration, then click Debug.
This will keep IntelliJ in listening mode, waiting for the JVM with the debugging port to start. For this, you need to start an OpenSearch instance containing the ML Commons Framework. The good news is that the project contains everything you need. Just open a new terminal and type:
./gradlew run --debug-jvm
It may take a while for the code to finish building and the instance to start. Once it does, you are ready to start debugging: any breakpoints you set in the source code will be hit once execution reaches them.
Ideally, you would know the source code off the top of your head before starting a debugging exercise; after all, you must know where to look if you suspect something about the codebase. But you don't need to spend lots of time studying the ML Commons Framework source code. You can start with the actions that are triggered every time you send a REST command to train, deploy, and run inferences on models. These actions can be found in the plugin folder of the project. Specifically, navigate to the following folder:
${PROJECT_DIR}/plugin/src/main/java/org/opensearch/ml/action
There, you will find packages containing entities you are likely familiar with. For this example, let's see how you could debug a request to register a new model group. Open the Java class TransportRegisterModelGroupAction in the editor and create a breakpoint on the first line after the declaration of the doExecute() method.
Now you can send a REST API call to OpenSearch to register a new model group:
POST /_plugins/_ml/model_groups/_register
{
  "name": "amazon_bedrock_models",
  "description": "Model group for Amazon Bedrock models"
}
...and IntelliJ will catch the exact moment the JVM executes that request, stopping the code right where you set the breakpoint.
🎥 Here is an end-to-end demo of the instructions given so far for you to follow along.

With Visual Studio Code (VSCode), once you clone your fork, the IDE will automatically trigger the Gradle build, which gets the project ready to use. This process may take some time depending on your computer's resources, so give it time to finish. Then you can configure remote debugging.
Because VSCode's Java debugger has fewer options than IntelliJ's, you will need a different approach to attach the debugger to the remote JVM. VSCode cannot listen for a remote JVM, which causes the Gradle build to fail because nothing is listening on port 5005.
As such, you will need to configure your own OpenSearch instance with remote debugging enabled. The easiest way to create a new instance of OpenSearch is using Docker. Create a new Docker Compose file and add the following code:
version: '3'

services:
  opensearch:
    image: opensearchproject/opensearch:2.11.1
    container_name: opensearch
    hostname: opensearch
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "DISABLE_INSTALL_DEMO_CONFIG=true"
      - "DISABLE_SECURITY_PLUGIN=true"
      - "OPENSEARCH_JAVA_OPTS=-Xms2g -Xmx2g -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - 9200:9200
      - 9600:9600
      - 5005:5005
    healthcheck:
      interval: 20s
      retries: 10
      test: ["CMD-SHELL", "curl -s http://localhost:9200"]

networks:
  default:
    name: opensearch_network
This file defines one OpenSearch instance configured to accept debugger connections over port 5005, via the jdwp agent set in OPENSEARCH_JAVA_OPTS. Start this instance by running the command:
docker compose up -d
Now, create a new file called launch.json in the .vscode folder and add the following JSON code.
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug ML Commons",
      "type": "java",
      "request": "attach",
      "hostName": "localhost",
      "port": 5005
    }
  ]
}
Ideally, you would know the source code off the top of your head before starting a debugging exercise; after all, you must know where to look if you suspect something about the codebase. But you don't need to spend lots of time studying the ML Commons Framework source code. You can start with the actions that are triggered every time you send a REST command to train, deploy, and run inferences on models. These actions can be found in the plugin folder of the project. Specifically, navigate to the following folder:
${PROJECT_DIR}/plugin/src/main/java/org/opensearch/ml/action
There, you will find packages containing entities you are likely familiar with. For this example, let's see how you could debug a request to register a new model group. Open the Java class TransportRegisterModelGroupAction in the editor and create a breakpoint on the first line after the declaration of the doExecute() method.
You are now ready to attach VSCode's debugger to OpenSearch. Go to the Run and Debug section and click the ▶️ button next to the Debug ML Commons option.
Now you can send a REST API call to OpenSearch to register a new model group:
POST /_plugins/_ml/model_groups/_register
{
  "name": "amazon_bedrock_models",
  "description": "Model group for Amazon Bedrock models"
}
...and VSCode will catch the exact moment the JVM executes that request, stopping the code right where you set the breakpoint.
🎥 Here is an end-to-end demo of the instructions given so far for you to follow along.

The models feature opens a window to exciting use cases where data can be amplified by the power of ML models and generative AI. Combined with the simplicity of OpenSearch, it enables your teams to create cutting-edge applications with very little effort.
I hope you have enjoyed reading this series. Please share this content within your social media circle so others can benefit from it as well. If you want to discover more about the amazing world of generative AI, keep an eye on this space and don't forget to subscribe to the AWS Developers YouTube channel. I'm sure you will be amazed by the new content to come. Finally, follow me on LinkedIn if you want to geek out about technology in general.
See you, next time!