Boa constrictor tames Graal VM



A lot of interesting things have been happening in the Java world lately. One such event was the release of the first production-ready version of Graal VM.



Personally, Graal has long been a source of undisguised interest for me, and I closely follow the talks and news in this area. Some time ago I watched a talk by Chris Thalinger in which he describes how Twitter managed to get significant performance gains by using machine learning to tune Graal. I had a strong desire to try something similar myself. In this article I want to share what came out of it.



Experiment



To run the experiment, I needed:



  1. a fresh Graal VM Community Edition; at the time of this writing that is 20.2.0
  2. a dedicated cloud environment for load testing
  3. NewRelic for collecting metrics
  4. a test load generator
  5. a Python program and a set of scripts implementing the ML algorithm itself


The idea, in a nutshell, is the following: we are looking for a set of parameter values

A = (a1, a2, .., an)

that maximizes some objective function f(x1, x2, .., xn).

As the objective function I took:

f = 1 / mean(CPUUtilization)

The lower the average CPU utilization under the same load, the higher the value of the objective function.

As the tunable parameters I chose the following Graal compiler flags:



-Dgraal.MaximumInliningSize -Dgraal.TrivialInliningSize  -Dgraal.SmallCompiledLowLevelGraphSize


All of them control inlining heuristics in the Graal JIT compiler. Inlining decisions have a noticeable effect on the resulting machine code, which makes these flags good candidates for tuning.
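To make the flags concrete, here is a minimal sketch (the helper name and argument names are mine, not from the original scripts) of how a set of values for these three flags can be turned into JVM options:


def graal_opts(maximum_inlining_size, trivial_inlining_size, small_graph_size):
    # Format the three Graal flags as JVM command-line options.
    # Helper and argument names are illustrative only.
    return [
        "-Dgraal.MaximumInliningSize=%d" % maximum_inlining_size,
        "-Dgraal.TrivialInliningSize=%d" % trivial_inlining_size,
        "-Dgraal.SmallCompiledLowLevelGraphSize=%d" % small_graph_size,
    ]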



Now a few words about the algorithm itself.

Each iteration consists of the following steps:

  1. restart the JVM under test with a new set of flag values
  2. run the load test against it
  3. fetch the metrics for the test interval from NewRelic
  4. compute the objective function value and hand it back to the optimizer

Twitter used Bayesian optimization for this task, and I decided to follow the same path.

Bayesian optimization suits cases where every evaluation of the objective function is expensive, and that is exactly our situation: each evaluation means a JVM restart, a warm-up and a full load-test run.



NewRelic



To pull metrics through the NewRelic REST API you need an APP_ID and an API_KEY.

APP_ID is the identifier of the application; it can be found in the APM section.

API_KEY is issued by NewRelic for your account.
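In the snippets below APP_ID and API_KEY appear as plain string literals; a small sketch, assuming you keep them in environment variables of your own naming instead:


import os

# Hypothetical variable names; any configuration mechanism works here.
APP_ID = os.environ["NEW_RELIC_APP_ID"]
API_KEY = os.environ["NEW_RELIC_API_KEY"]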



The response comes back in the following format:



{
  "metric_data": {
    "from": "time",
    "to": "time",
    "metrics_not_found": "string",
    "metrics_found": "string",
    "metrics": [
      {
        "name": "string",
        "timeslices": [
          {
            "from": "time",
            "to": "time",
            "values": "hash"
          }
        ]
      }
    ]
  }
}


The request itself looks like this:



import requests

def request_metrics(params):
    app_id = "APP_ID"
    url = "https://api.newrelic.com/v2/applications/" + app_id + "/metrics/data.json"
    headers = {'X-Api-Key': "API_KEY"}
    response = requests.get(url, headers=headers, params=params)
    return response.json()


To get CPU Utilization, params is filled in like this:



params = {
    'names[]': "CPU/User/Utilization",
    'values[]': "percent",
    'from': timerange[0],
    'to': timerange[1],
    'raw': "false"
}


timerange holds the start and end time of the load-test run.
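A sketch of what timerange might contain, assuming the ISO 8601 timestamp format accepted by the v2 metrics endpoint:


# Example only: start and end of the load-test run in ISO 8601, UTC
timerange = ["2020-09-01T10:00:00+00:00", "2020-09-01T10:30:00+00:00"]


The values for a particular metric are then pulled out of the JSON response with a small helper: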





def get_timeslices(response_json, value_name):
    # Pull the list of values for the requested metric out of the response
    metrics = response_json['metric_data']['metrics'][0]
    timeslices = metrics['timeslices']
    values = []
    for t in timeslices:
        values.append(t['values'][value_name])
    return values
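The objective function shown later calls a get_results helper that is not listed in the article; presumably it simply combines the two functions above, roughly like this (a sketch under that assumption):


def get_results(timerange):
    # Query NewRelic for CPU utilization over the test interval
    # and return the list of per-timeslice values.
    params = {
        'names[]': "CPU/User/Utilization",
        'values[]': "percent",
        'from': timerange[0],
        'to': timerange[1],
        'raw': "false"
    }
    return get_timeslices(request_metrics(params), "percent")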




That is everything we need from NewRelic.



For the optimization itself I used the BayesianOptimization library.
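The library is published on PyPI as bayesian-optimization and is imported from the bayes_opt package:


# pip install bayesian-optimization
from bayes_opt import BayesianOptimization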



First, we define the objective function:



def objective_function(maximumInliningSize, trivialInliningSize, smallCompiledLowLevelGraphSize):
    update_config(int(maximumInliningSize), int(trivialInliningSize), int(smallCompiledLowLevelGraphSize))
    timerange = do_test()
    data = get_results(timerange)
    return calculate(data)


update_config applies the new flag values to the service under test and restarts it, while do_test runs the load test and returns the time range of the run.



On every iteration the JVM is restarted with the new parameter values, and the application is given time to warm up before the measurements are taken.
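Neither helper is shown in the original scripts, so the following is only a rough sketch of their shape; the restart and load-test scripts, the file name and the warm-up pause are all assumptions on my part:


import subprocess
import time
from datetime import datetime, timezone

def update_config(maximum_inlining_size, trivial_inlining_size, small_graph_size):
    # Hypothetical: write the new Graal flags where the service startup script
    # can pick them up, then restart the service so the new JVM options apply.
    # graal_opts is the small helper sketched earlier.
    with open("graal.options", "w") as f:
        f.write("\n".join(graal_opts(maximum_inlining_size,
                                     trivial_inlining_size,
                                     small_graph_size)))
    subprocess.run(["./restart_service.sh"], check=True)

def do_test():
    # Hypothetical: let the JVM warm up, run the load generator,
    # and return the interval to query NewRelic for.
    time.sleep(300)  # warm-up pause, duration is a guess
    start = datetime.now(timezone.utc).isoformat()
    subprocess.run(["./run_load_test.sh"], check=True)
    end = datetime.now(timezone.utc).isoformat()
    return [start, end]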



calculate turns the collected metrics into the value of the objective function:



    value = 1 / (mean(filtered_data))
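Here filtered_data is the list of CPU samples for the test interval; the exact filtering is not shown in the article, so this sketch simply drops the first few timeslices as a guess at the warm-up cut-off:


from statistics import mean

def calculate(data):
    # Drop the first few timeslices so warm-up does not skew the average;
    # the real filtering used in the experiment is not shown in the article.
    filtered_data = data[5:]
    return 1 / mean(filtered_data)


The search space for the optimizer is defined by per-parameter bounds: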




pbounds = {
    'maximumInliningSize': (200, 500),
    'trivialInliningSize': (10, 25),
    'smallCompiledLowLevelGraphSize': (200, 650)
}


It also makes sense to explicitly probe a known starting point, in this case the default values of the flags:



optimizer.probe(
    params={"maximumInliningSize": 300.0,
            "trivialInliningSize": 10.0,
            "smallCompiledLowLevelGraphSize": 300.0},
    lazy=True,
)


Putting everything together:

def autotune():
    pbounds = {
        'maximumInliningSize': (200, 500),
        'trivialInliningSize': (10, 25),
        'smallCompiledLowLevelGraphSize': (200, 650)
    }

    optimizer = BayesianOptimization(
        f=objective_function,
        pbounds=pbounds,
        random_state=1,
    )

    optimizer.probe(
        params={"maximumInliningSize": 300.0,
                "trivialInliningSize": 10.0,
                "smallCompiledLowLevelGraphSize": 300.0},
        lazy=True,
    )

    optimizer.maximize(
        init_points=2,
        n_iter=10,
    )

    print(optimizer.max)


objective_function will be called 12 times: 2 random initial points plus 10 optimization iterations.

Every call involves a restart, a warm-up and a full load test, so a complete run takes a considerable amount of time.



Intermediate results can be printed like this:



for i, res in enumerate(optimizer.res):
    print("Iteration {}: \n\t{}".format(i, res))


The output looks something like this:



Iteration 0:
    {'target': 0.02612330198537095, 'params': {'maximumInliningSize': 300.0, 'smallCompiledLowLevelGraphSize': 300.0, 'trivialInliningSize': 10.0}}
Iteration 1:
    {'target': 0.02666666666666667, 'params': {'maximumInliningSize': 325.1066014107722, 'smallCompiledLowLevelGraphSize': 524.1460220489712, 'trivialInliningSize': 10.001715622260173}}
...




After the last iteration, print(optimizer.max) shows the best combination found.



For comparison, here is how the CPU Utilization of the service looks with the default Graal settings and with the tuned flag values:

[graphs omitted: CPU utilization before and after tuning]







As a result, CPU utilization dropped by 4-5%.



The CPU gain is modest, but as a proof of concept the result is quite convincing.



Finally, it is worth remembering that Graal can be enabled on a regular JVM with just a couple of flags, and that it works not only with Java but with any JVM language, including Scala and Kotlin.



