Waste free coding
This article documents solving a meaningful event-processing problem in a highly efficient manner by reducing waste in the software stack.
Java is often seen as a memory hog that cannot operate efficiently in low-memory environments. The aim is to demonstrate what many think is impossible: that a meaningful Java program can operate in almost no memory. The example processes 2.2 million CSV records per second in a 3MB heap, with zero GC, on a single thread in Java.
You will learn where the main areas of waste exist in a Java application and the patterns that can be employed to reduce them. The concept of zero-cost abstraction is introduced, along with how many optimisations can be automated at compile time through code generation. A Maven plugin simplifies the developer workflow.
Our goal is not high performance; that comes as a by-product of maximising efficiency. The solution employs Fluxtion, which uses a fraction of the resources of existing Java event-processing frameworks.
Computing and the climate
Climate change and its causes are currently of great concern to many. Computing is a major source of emissions, producing a carbon footprint comparable to that of the entire airline industry. In the absence of regulation dictating computing energy consumption we, as engineers, have to take responsibility for producing efficient systems, balanced against the cost of creating them.

On a panel session at InfoQ 2019 in London, Martin Thompson spoke passionately about building energy-efficient computing systems. He noted that controlling waste is the critical factor in minimising energy consumption. Martin's comments resonated with me, as the core philosophy behind Fluxtion is to remove unnecessary resource consumption. That panel session was the inspiration for this article.
Processing requirements
Requirements for the processing example are:
- Operate in a 3MB heap with zero GC
- Use standard java libraries only, no "unsafe" optimisations
- Read a CSV file containing millions of rows of input data
- Input is a set of unknown events, no pre-loading of data
- Data rows are heterogeneous types
- Process each row to calculate multiple aggregate values
- Calculations are conditional on the row type and data content
- Apply rules to aggregates and count rule breaches
- Data is randomly distributed to prevent branch prediction
- Partition calculations based on row input values
- Collect and group partitioned calculations into an aggregate view
- Publish a summary report at the end of file
- Pure Java solution using high level functions
- No JIT warm-up
Example: position and profit monitoring
The CSV file contains trades and prices for a range of assets, one record per row. Position and profit calculations for each asset are partitioned into their own memory space. Asset calculations are updated on every matching input event. Profits for all assets are aggregated into a portfolio profit. Each asset monitors its current position/profit state and records a count if either breaches a pre-set limit. The portfolio profit is monitored and loss breaches are counted.
Rules are validated at asset and portfolio level for each incoming event. Counts of rule breaches are updated as events are streamed into the system.
Row data types
AssetPrice - [price: double] [symbol: CharSequence]
Deal - [price: double] [symbol: CharSequence] [size: int]
Sample data
The CSV file has a header line for each type to allow dynamic mapping of column positions to fields. Each row is prefixed with the simple class name of the target type to marshal into. A sample set of records, including headers:
Deal,symbol,size,price
AssetPrice,symbol,price
AssetPrice,FORD,15.0284
AssetPrice,APPL,16.4255
Deal,AMZN,-2000,15.9354
Calculation description
Asset calculations are partitioned by symbol and then gathered into a portfolio calculation. A plain-Java sketch of the per-asset logic follows the rules below.
Partitioned asset calculations
asset position = sum(Deal::size)
deal cash value = (Deal::price) X (Deal::size) X -1
cash position = sum(deal cash value)
mark to market = (asset position) X (AssetPrice::price)
profit = (asset mark to market) + (cash position)
Portfolio calculations
portfolio profit = sum(asset profit)
Monitoring rules
asset loss > 2,000
asset position outside of range +- 200
portfolio loss > 10,000
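To make these calculations and rules concrete, here is a minimal, hand-written Java sketch of the per-asset state (the class and method names are invented for illustration; the article's actual implementation is generated by Fluxtion rather than written by hand):

```java
// Hand-written sketch of the per-asset calculations and monitoring rules above.
// Illustration only: in the article this logic is generated by Fluxtion.
public final class AssetCalculator {

    // Aggregates held as primitives - no autoboxing, no allocation per event.
    private long assetPosition;   // sum(Deal::size)
    private double cashPosition;  // sum(deal cash value)
    private double lastPrice;     // latest AssetPrice::price

    // Rule-breach counters; each notifier fires once until the rule is valid again.
    private int lossBreaches;
    private int positionBreaches;
    private boolean lossNotified;
    private boolean positionNotified;

    public void onDeal(int size, double price) {
        assetPosition += size;              // asset position = sum(Deal::size)
        cashPosition += price * size * -1;  // deal cash value summed into cash position
        checkRules();
    }

    public void onAssetPrice(double price) {
        lastPrice = price;
        checkRules();
    }

    public double profit() {
        // profit = mark to market + cash position
        return assetPosition * lastPrice + cashPosition;
    }

    public int lossBreaches()     { return lossBreaches; }
    public int positionBreaches() { return positionBreaches; }

    private void checkRules() {
        // asset loss > 2,000
        if (profit() < -2_000) {
            if (!lossNotified) { lossBreaches++; lossNotified = true; }
        } else {
            lossNotified = false;           // reset when the rule is valid again
        }
        // asset position outside of range +-200
        if (assetPosition > 200 || assetPosition < -200) {
            if (!positionNotified) { positionBreaches++; positionNotified = true; }
        } else {
            positionNotified = false;
        }
    }
}
```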
NOTE:
- A count is made when a notifier indicates a rule breach. The notifier only fires on the first breach until it is reset. The notifier is reset when the rule becomes valid again.
- A positive Deal::size is a buy, a negative value is a sell.
Execution environment
To ensure the memory requirements are met (zero GC and a 3MB heap) the Epsilon no-op garbage collector is used with a max heap size of 3MB. If more than 3MB of memory is allocated during the life of the process, the JVM will immediately exit with an out-of-memory error.

To run the sample, clone the repository and, from the root of the trading-monitor project, run the jar in the dist directory to generate a test data file of 4 million rows:
git clone --branch article_may2019 https://github.com/gregv12/articles.git
cd articles/2019/may/trading-monitor/
jdk-12.0.1\bin\java.exe -jar dist\tradingmonitor.jar 4000000
By default tradingmonitor.jar processes the data/generated-data.csv file. Using the command above, the input data should have 4 million rows and be 94MB in size, ready for execution.
Results
To execute the test, run tradingmonitor.jar with no arguments:
jdk-12.0.1\bin\java.exe -verbose:gc -Xmx3M -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -jar dist\tradingmonitor.jar
Executing the test for 4 million rows, the summary results are:
Process row count = 4 million
Processing time = 1.815 seconds
Avg row exec time = 453 nanoseconds
Process rate = 2.205 million records per second
garbage collections = 0
allocated mem total = 2857 KB
allocated mem per run = 90 KB
OS = windows 10
Processor = Intel Core i7-7700 @ 3.6GHz
Memory = 16 GB
Disk = 512GB Samsung SSD PM961 NVMe
NOTE: Results are from the first run without JIT warm-up. After JIT warm-up the code execution times are approximately 10% quicker. Total allocated memory is 2.86MB, which includes starting the JVM.
Analysing Epsilon's output, we estimate the application allocates about 15% of that memory across the 6 runs, or roughly 90KB per run. There is a good chance the application data will fit inside the L1 cache; more investigation is required here.
Output
The test program loops 6 times, printing out the results each time; Epsilon records memory statistics at the end of the run.

Waste hotspots
The table below identifies functions in the processing loop that traditionally create waste, together with the waste-avoidance techniques used in the example.
Function | Source of waste | Effect | Avoidance |
---|---|---|---|
Read CSV file | Allocates a new String for each row | GC | Read each byte into a flyweight and process it in an allocation-free decoder |
Data holder for row | Allocates a data instance for each row | GC | Flyweight single data instance |
Read column values | Allocates an array of Strings for each row's columns | GC | Push chars into a re-usable char buffer |
Convert value to type | String-to-type conversions allocate memory | GC | Zero-allocation converters; CharSequence in place of String |
Push column value to holder | Autoboxing of primitive types allocates memory | GC | Primitive-aware functions push data; zero allocation |
Partitioning data processing | Data partitions processed in parallel; tasks allocated to queues | GC / lock | Single-thread processing; no allocation or locks |
Calculations | Autoboxing; immutable types allocating intermediate instances; stateless functions require external state storage and allocation | GC | Generated functions with no autoboxing; stateful functions with zero allocation |
Gathering summary calc | Results pushed from partition threads onto a queue; requires allocation and synchronisation | GC / lock | Single-thread processing; no allocation or locks |
Waste reduction solutions
The code that implements the event processing is generated using Fluxtion. Generating a solution allows for a zero-cost abstraction approach in which the compiled solution carries minimal overhead. The programmer describes the desired behaviour and, at build time, an optimised solution is generated that meets those requirements. For this example the generated code can be viewed here.
The Maven pom contains a profile for rebuilding the generated files using the Fluxtion Maven plugin, executed with the following command:
mvn -Pfluxtion install
File reading
Data is extracted from the input file as a series of CharEvents and published to the CSV type marshaller. Each character is individually read from the file and pushed into a CharEvent. As the same CharEvent instance is re-used, no memory is allocated after initialisation. The logic for streaming CharEvents is located in the CharStreamer class. The whole 96MB file can be read with almost zero memory allocated on the heap by the application.
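A small, hand-written sketch of this pattern is shown below. The CharEvent and handler types here are simplified stand-ins for illustration, not the actual Fluxtion CharStreamer classes: a single mutable event instance is re-filled for each character, so streaming the file allocates nothing per character.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Simplified stand-ins for illustration; not the real Fluxtion classes.
final class CharEvent {
    private char value;
    char getValue()           { return value; }
    void setValue(char value) { this.value = value; }
}

interface CharHandler {
    void onChar(CharEvent event);
}

public final class CharStreamerSketch {

    /** Streams every character of the file through one re-used CharEvent instance. */
    public static void stream(Path file, CharHandler handler) throws IOException {
        CharEvent event = new CharEvent();                // allocated once, re-used per char
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            int c;
            while ((c = reader.read()) != -1) {
                event.setValue((char) c);                 // mutate the flyweight, don't allocate
                handler.onChar(event);
            }
        }
    }
}
```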
CSV processing
Adding a @CsvMarshaller annotation to a JavaBean tells Fluxtion to generate a CSV parser at build time. Fluxtion scans application classes for the @CsvMarshaller annotation and generates marshallers as part of the build process. For an example see AssetPrice.java, which results in the generation of AssetPriceCsvDecoder0. The decoder processes CharEvents and marshals the row data into a target instance.
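As a rough sketch, the marshalling target is just an ordinary bean, something like the class below (field names taken from the row data types earlier; the @CsvMarshaller annotation and its import are omitted here because the exact package depends on the Fluxtion version in use):

```java
// Sketch of a marshalling target bean. In the real project this class carries
// Fluxtion's @CsvMarshaller annotation so a decoder is generated at build time
// (the annotation and its import are omitted here; the package depends on the
// Fluxtion version).
public class AssetPrice {

    private CharSequence symbol;  // CharSequence rather than String to avoid per-row allocation
    private double price;         // primitive field, no autoboxing

    public CharSequence getSymbol()            { return symbol; }
    public void setSymbol(CharSequence symbol) { this.symbol = symbol; }

    public double getPrice()                   { return price; }
    public void setPrice(double price)         { this.price = price; }
}
```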
The generated CSV parsers employ the strategies outlined in the table above, avoiding unnecessary memory allocation and re-using object instances for each row processed (a hand-written sketch follows the list below):
- A single re-usable character buffer instance stores the row characters
- A flyweight re-usable instance is the target for marshalled column data
- Conversions are performed directly from a CharSequence into target types without intermediate object creation.
- If CharSequences are used in the target instance then no Strings are created; a flyweight CharSequence is used instead.
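A hand-written sketch of the last two points might look like the following (illustration only, not the generated decoder): the column characters sit in a re-usable buffer that itself implements CharSequence, and the numeric conversion reads those characters directly, so no String is ever created on the hot path.

```java
// Re-usable column buffer acting as a flyweight CharSequence, with a
// zero-allocation numeric conversion. Illustration only - no error handling,
// no exponent support - and not the code Fluxtion generates.
public final class ColumnBuffer implements CharSequence {

    private final char[] chars = new char[64]; // re-used for every column of every row
    private int length;

    void append(char c) { chars[length++] = c; }
    void reset()        { length = 0; }

    @Override public int length()           { return length; }
    @Override public char charAt(int index) { return chars[index]; }

    @Override public CharSequence subSequence(int start, int end) {
        return new String(chars, start, end - start); // not used on the hot path
    }

    @Override public String toString() { return new String(chars, 0, length); }

    /** Parses the buffered characters as a double without creating a String. */
    double asDouble() {
        int i = 0;
        boolean negative = false;
        if (length > 0 && (chars[0] == '-' || chars[0] == '+')) {
            negative = chars[0] == '-';
            i++;
        }
        double value = 0;
        for (; i < length && chars[i] != '.'; i++) {
            value = value * 10 + (chars[i] - '0');
        }
        if (i < length && chars[i] == '.') {
            double scale = 0.1;
            for (i++; i < length; i++) {
                value += (chars[i] - '0') * scale;
                scale *= 0.1;
            }
        }
        return negative ? -value : value;
    }
}
```

The same buffer instance is reset and re-filled for every column, so decoding a row produces no garbage.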
Calculations
The FluxtionBuilder class describes the asset calculation using the Fluxtion streaming API. The declarative form is similar to the Java Stream API, but builds real-time event processing graphs. Methods marked with the @SepBuilder annotation are invoked by the Maven plugin to generate a static event processor. The asset calculations are described in the FluxtionBuilder class.
The functional description is converted into an efficient imperative form for execution. A generated event processor, SymbolTradeMonitor, is the entry point for AssetPrice and Deal events. Generated helper classes are used by the event processor to calculate the aggregates; the helper classes are here.
The processor receives events from the partitioner and invokes helper functions to extract data and call calculation functions, storing aggregate results in nodes. Aggregate values are pushed into fields of the results instance, AssetTradePos. No intermediate objects are created, and any primitive calculation is handled without autoboxing. Calculation nodes reference data from parent instances; no data objects are moved around the graph during execution. Once the graph is initialised there are no memory allocations when an event is processed.
An image representing the processing graph for an asset calculation is generated at the same time as the code, and is shown below:
asset processing graph
A similar set of calculations is described for the portfolio in the FluxtionBuilder class's buildPortfolioAnalyser method, generating a PortfolioTradeMonitor event handler. The AssetTradePos is published from a SymbolTradeMonitor to the PortfolioTradeMonitor. The generated files for the portfolio calculations are located here.
Partitioning and gathering
All calculations, partitioning and gathering operations happen in the same single thread, so no locks are required. Immutable objects are not needed as there are no concurrency issues to handle. The marshalled events have an isolated private scope, allowing safe re-use of instances, as the generated event processors control the lifecycle of the instances during event processing.
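A minimal, hand-written sketch of the single-threaded partition-and-gather idea is shown below (the class name is invented; AssetCalculator is the per-asset sketch from the calculation section, and the real partitioner is part of the generated Fluxtion solution):

```java
import java.util.HashMap;
import java.util.Map;

// Hand-written sketch of single-threaded partitioning and gathering - not the
// generated Fluxtion partitioner. AssetCalculator is the per-asset sketch shown
// earlier. No queues, no locks; gathering is a plain method call on the same thread.
public final class SymbolPartitionerSketch {

    private final Map<String, AssetCalculator> partitions = new HashMap<>();

    // The real solution keys the lookup on a re-used CharSequence to avoid creating
    // a String per event; a plain String keeps this sketch short.
    public void onDeal(String symbol, int size, double price) {
        partitions.computeIfAbsent(symbol, s -> new AssetCalculator())
                  .onDeal(size, price);
    }

    public void onAssetPrice(String symbol, double price) {
        partitions.computeIfAbsent(symbol, s -> new AssetCalculator())
                  .onAssetPrice(price);
    }

    /** Portfolio profit = sum(asset profit), gathered in the same thread. */
    public double portfolioProfit() {
        double profit = 0;
        for (AssetCalculator asset : partitions.values()) {
            profit += asset.profit();
        }
        return profit;
    }
}
```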
System data flow

The diagram below shows the complete data flow for the system, from bytes on disk to the published summary report. The purple boxes are generated as part of the build; the blue boxes are re-usable classes.

Conclusion
In this article I have shown it is possible to solve a complex event-handling problem in Java with almost no waste. High-level functions were utilised in a declarative/functional approach to describe the desired behaviour, and the generated event processors meet the requirements of that description. A simple annotation triggered marshaller generation. The generated code is simple imperative code that the JIT can optimise easily. No unnecessary memory allocations are made, and instances are re-used as much as possible.

Following this approach, high-performance solutions with low resource consumption are within the grasp of the average programmer. Traditionally only specialist engineers with many years of experience could achieve these results.
Although novel in Java, this approach is familiar in other languages, where it is commonly known as zero-cost abstraction.
In today's cloud-based computing environments resources are charged per unit consumed, so any solution that saves energy also has a positive effect on the company's bottom line.
I too have created substantial multithreaded Java programs with no garbage created in critical code. I used threadlocal resource pools for objects that would otherwise be garbage collected. I created my own threadlocal String pool which has the performance benefit of allowing object identity comparison rather than equals().
My loop coding techniques use explicit index variables to avoid the Iterator that otherwise gets created:
for (int i = 0, size = myList.size(); i < size; i++) {
    myList.get(i);
}
instead of
for (Object myListItem : myList) {
}
Hi Stephen,
Thanks for the reply. When you have resources with short lifespans in tight loops it can be best to re-use instances. In a multi-threaded environment the contention cost on the pool can become critical; even threadlocals have a cost.
My results on using a String cache are mixed: I get more determinism, but throughput does suffer. In my experience you have to rigorously apply the optimisations throughout the whole system before the benefits outweigh the cost.
As always with performance improvements, test and make decisions from facts. But first, question whether you are measuring things the right way. http://highscalability.com/blog/2015/10/5/your-load-generator-is-probably-lying-to-you-take-the-red-pi.html