2021-01-12 - What does Lucene Benchmark?

Lucene is a widely used Java library for free-text search. It powers more directly used search tools such as Solr and Elasticsearch, but is also used to add free-text search to e.g. Neo4j. Mike McCandless maintains a set of nightly benchmarks that aim to make sure that there are no (unexpected) performance regressions. [1] [2]

The performance benchmarks test both indexing and searching. That there exists a recurring benchmark for such a performance-sensitive library is fantastic. That it is being actively used and acted upon by Lucene contributors is even better. People take action when performance degrades, and try to figure out why.

In this post, we'll be looking at profiling data, from Flight Recorder, of these benchmarks. [3] The data is from Lucene's master branch as of 2021-01-10, commit 7e94a56e815f28419da61eaee1e13b92a9652338. I tried to re-create the nightly benchmarks to the best of my ability, but I might have made some mistakes.

I have run the benchmarks with Flight Recorder enabled, and analyzed the profiling data using Blunders (the page you are on). If you would like to profile your applications in real time, please reach out to me. But for now, let's dig into the data.

Indexing

Let's start with indexing. This benchmark indexes about 10M documents from the English Wikipedia. A first observation is that the CPU utilization starts at around 25% and rises to 50% for a while. Utilization is measured against all available CPU cores; my not-exactly-brand-new desktop has 4 of them (counting hyper-threading), so 25% corresponds to one fully busy core.

We can conclude that whatever happens in the first part of the benchmark is most likely CPU-bound (on this machine) and single-threaded. The second part looks like it uses two threads. Feel free to click around in the chart a bit.

[Chart: CPU utilization, allocation rate, and garbage collections during indexing, 12:55 to 13:15]

One other interesting observation from this chart is that the allocation rate (i.e. the rate at which new objects are allocated in the JVM) is rather manageable, mostly staying in the range of 170 to 300 megabytes per second. The chart also shows garbage collections; you would probably have to zoom in to see them, though, since the garbage collector (G1 in this case) has no issues keeping up with the allocation.

CPU Drill-down

So, where is that CPU time spent? Let's drill down in an (upside-down) flame graph. It is essentially a stack trace, where the parent (above) calls its children (below). The width of an item shows how much time the JVM spent in that code path. You can click on items to see only that part of the chart.

[Interactive flame graph of CPU time during indexing, rooted at IndexThreads$IndexThread.run → IndexWriter.addDocument. Flushing (IndexingChain.flush) dominates, split between (1) IndexingChain.writeVectors → Lucene90VectorWriter.writeGraph → HnswGraphBuilder.build → HnswGraph.search, with hot children SparseFixedBitSet.<init> (whose RamUsageEstimator.shallowSizeOf call is partly attributed to LinkedHashMap.get), NeighborQueue/LongHeap operations, and VectorValues$SearchStrategy.compare → VectorUtil.dotProduct; and (2) FreqProxTermsWriter.flush → BlockTreeTermsWriter.write plus TermsHashPerField.sortTerms → MSBRadixSorter. Per-document work runs through IndexingChain.processDocument (TermsHashPerField.add → BytesRefHash, StandardTokenizer) and LineFileDocs.nextDoc (SimpleDateFormat.parse). Merging runs under ConcurrentMergeScheduler$MergeThread.run → SegmentMerger, where mergeVectorValues again reaches HnswGraphBuilder.build → HnswGraph.search and mergeTerms writes postings; LineFileDocs reading/decoding and a final flush via Indexer.countUniqueTerms account for the rest.]

In order to make sense of this drill-down, it helps to know that Lucene indexes documents by first writing them into small, immutable segments, and then merging those segments into larger ones.
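
The flush-then-merge model can be illustrated with a small toy sketch. This uses plain Java collections, not Lucene's actual API; all names here are made up for illustration. Documents accumulate in a buffer, are flushed as small immutable segments, and segments are later merged.

```java
import java.util.*;

// Toy illustration (not Lucene's API) of the flush-then-merge indexing model:
// documents accumulate in an in-memory buffer, get flushed as small immutable
// "segments", and segments are later merged into larger ones.
public class SegmentModel {
    final List<List<String>> segments = new ArrayList<>();
    final List<String> buffer = new ArrayList<>();
    final int maxBuffered;

    SegmentModel(int maxBuffered) { this.maxBuffered = maxBuffered; }

    void addDocument(String doc) {
        buffer.add(doc);
        if (buffer.size() >= maxBuffered) flush();
    }

    void flush() {
        if (buffer.isEmpty()) return;
        List<String> segment = new ArrayList<>(buffer);
        Collections.sort(segment);                        // segments are sorted and immutable
        segments.add(Collections.unmodifiableList(segment));
        buffer.clear();
    }

    // Merge all segments into one big sorted segment (real merges are incremental
    // and governed by a merge policy).
    List<String> mergeAll() {
        List<String> merged = new ArrayList<>();
        for (List<String> s : segments) merged.addAll(s);
        Collections.sort(merged);
        return merged;
    }

    public static void main(String[] args) {
        SegmentModel idx = new SegmentModel(2);
        for (String d : new String[]{"d", "a", "c", "b", "e"}) idx.addDocument(d);
        idx.flush();
        System.out.println(idx.segments.size() + " segments, merged: " + idx.mergeAll());
    }
}
```

The key property the sketch captures is that flushing and merging are separate kinds of work, which is exactly the split we see in the flame graph.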

Knowing that, we can figure out that roughly 72% of the CPU time is spent indexing into new segments and about 25% is spent merging. These proportions shift over time; in the beginning, much less merging happens. This also accounts for the differing CPU usage in the chart above: in this benchmark, indexing is limited to one thread.

HNSWhat?

Interestingly, in both the initial indexing and the merging, a significant portion of the CPU time (remember that we are CPU-bound) is spent building something called an HNSW graph. So, what is that?

It turns out that Hierarchical Navigable Small World graphs are data structures useful for searching by document similarity. They were recently merged into Lucene's master branch. They seem to have been added to the Lucene nightly benchmarks on 2020-12-09, and promptly caused the indexing throughput to drop abruptly. This is consistent with what the profiler tells us. [4]

That the HNSW graphs are included in the indexing benchmark is of course great news for anyone wanting to use them in the future; it highlights performance issues in the implementation. But it is probably a bit problematic that they make up such a big part of the benchmark: any other indexing performance degradation or improvement will seem much less significant. So, for users who do not want HNSW graphs, the benchmarks become less useful.

HNSW.search (and its children)

A large part of the time spent building HNSW graphs comes from a method called HnswGraph.search, which finds the neighbours of a document in order to insert it in the correct place in the graph. The same method is also used at search time.
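
To get a feel for what such a search does, here is a heavily simplified, single-layer sketch. Lucene's real implementation is multi-layer, caches scores, and uses primitive heaps; the structure below is illustrative only, but it shows the two hot ingredients the profile surfaces: a visited-node bit set and repeated similarity computations.

```java
import java.util.*;

// Simplified single-layer sketch of a greedy best-first graph search of the kind
// HnswGraph.search performs: expand the most similar unvisited candidate, up to a
// visit budget, tracking visited nodes in a BitSet. Not Lucene's actual code.
public class GreedyGraphSearch {
    static float dot(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    static int search(float[] query, float[][] vectors, int[][] neighbors,
                      int entry, int maxVisits) {
        BitSet visited = new BitSet(vectors.length);   // the role SparseFixedBitSet plays
        // Max-heap on similarity; a real implementation would cache scores instead
        // of recomputing the dot product in the comparator.
        PriorityQueue<Integer> candidates = new PriorityQueue<>(
            Comparator.comparingDouble((Integer n) -> -dot(query, vectors[n])));
        candidates.add(entry);
        visited.set(entry);
        int best = entry, visits = 0;
        while (!candidates.isEmpty() && visits++ < maxVisits) {
            int node = candidates.poll();
            if (dot(query, vectors[node]) > dot(query, vectors[best])) best = node;
            for (int nb : neighbors[node])
                if (!visited.get(nb)) { visited.set(nb); candidates.add(nb); }
        }
        return best;
    }

    public static void main(String[] args) {
        float[][] vectors = {{1, 0}, {0.9f, 0.1f}, {0, 1}, {0.5f, 0.5f}};
        int[][] neighbors = {{1, 3}, {0, 2}, {1, 3}, {0, 2}};
        // Query (0, 1) should find node 2, whose vector is (0, 1).
        System.out.println(search(new float[]{0, 1}, vectors, neighbors, 0, 10));
    }
}
```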

A lot of time seems to be spent dealing with SparseFixedBitSet, which here seems to be used to keep track of which nodes have already been seen (nodes are identified by Java ints). If I were to try to improve performance, I would explore whether a simpler BitSet implementation might help. My understanding is that the bit set's size is O(num_docs) in a segment (when used for indexing). A naive, dense bit set uses O(entries) bits; even for relatively large indices, this might be fine if the bit set were re-used when building the graph.
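
As a rough sketch of what such a simpler, reusable dense bit set could look like (an illustration of the idea, not a proposed patch against Lucene):

```java
// Minimal dense fixed-size bit set of the kind a reusable "visited" set could use:
// one bit per node id, backed by a long[] (numBits / 8 bytes total), cleared for
// reuse between searches instead of reallocated per search.
public class DenseBits {
    private final long[] words;

    DenseBits(int numBits) { words = new long[(numBits + 63) >>> 6]; }

    void set(int i)    { words[i >>> 6] |= 1L << i; }   // Java masks the shift to i & 63
    boolean get(int i) { return (words[i >>> 6] & (1L << i)) != 0; }
    void clear()       { java.util.Arrays.fill(words, 0L); }

    public static void main(String[] args) {
        // For 10M docs this is 10_000_000 / 8 bytes, i.e. about 1.25 MB per bit set.
        DenseBits bits = new DenseBits(10_000_000);
        bits.set(123);
        bits.set(9_999_999);
        System.out.println(bits.get(123) + " " + bits.get(124));
        bits.clear();                                    // ready for the next search
        System.out.println(bits.get(123));
    }
}
```

The trade-off is memory versus constant factors: the sparse variant wins when few bits are ever set, while a dense array has branch-free get/set and no per-search allocation if it is reused.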

Another code path where a lot of time is spent is the dot product used to measure how similar two documents are. The inner loop here is unrolled, indicating that the author probably knew that this was a hot loop. One thing that I am curious about is whether the author considered Math.fma; in theory it should be faster on CPUs with FMA instructions. [5]
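
To make the comparison concrete, here are three equivalent dot-product formulations: a plain loop, a 4-way unrolled loop in the spirit of VectorUtil.dotProduct (not its actual code), and one using Math.fma. Note that Math.fma falls back to a slow, correctly rounded software implementation on CPUs without FMA support, so this would need benchmarking rather than taking the theory on faith.

```java
// Three equivalent dot products: plain, 4-way unrolled (independent accumulators
// give the CPU more instruction-level parallelism), and fused multiply-add.
public class Dot {
    static float plain(float[] a, float[] b) {
        float s = 0f;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static float unrolled(float[] a, float[] b) {
        float s0 = 0f, s1 = 0f, s2 = 0f, s3 = 0f;
        int i = 0;
        for (; i + 3 < a.length; i += 4) {   // 4 independent accumulators
            s0 += a[i] * b[i];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        float s = s0 + s1 + s2 + s3;
        for (; i < a.length; i++) s += a[i] * b[i];      // remainder
        return s;
    }

    static float fma(float[] a, float[] b) {
        float s = 0f;
        // One rounding per step instead of two; hardware FMA where available.
        for (int i = 0; i < a.length; i++) s = Math.fma(a[i], b[i], s);
        return s;
    }

    public static void main(String[] args) {
        float[] a = {1, 2, 3, 4, 5}, b = {5, 4, 3, 2, 1};
        System.out.println(plain(a, b) + " " + unrolled(a, b) + " " + fma(a, b));
    }
}
```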

Surprises

The observant reader might have noticed that part of the CPU time spent estimating the RAM usage of SparseFixedBitSets was attributed to the LinkedHashMap.get method. This is weird, since the RamUsageEstimator.shallowSizeOfArray method does not use a LinkedHashMap. It does use an IdentityHashMap, but those are not the same thing. Java Mission Control reports the same thing. This is rather disturbing; I believe it is an instance of OpenJDK bug JDK-8201516, which says that the debug symbols emitted by the JVM are sometimes wrong. [6]

So, can't we trust Flight Recorder? It seems like no, not 100% at least. On the other hand, I have used it for many years and have been able to predict how to improve performance with it. My personal guess is that it is decently accurate from a bird's-eye view, but that it might mess things up a bit when the JIT starts inlining. But what do I know? I've never touched the OpenJDK source code. But maybe I should.

Searching

Search performance is arguably more important than indexing performance; after all, that's why we have indices (as opposed to just using grep). Searches are made on 2 cores (out of 4) in this benchmark, and the profiling data makes it evident that this is CPU-bound:

[Chart: CPU utilization during searching, 14:55 to 15:35]

The test desktop has a decently new SSD, but nothing fancy. Thus it might be surprising that CPU appears to be the scarcest resource; one might assume that we would be more IO-bound.

Since the search benchmark tests many different kinds of queries, there is no single obvious source of slowness to point out:

[Interactive flame graph of CPU time during searching, rooted at TaskThreads$TaskThread.run → SearchTask.go → IndexSearcher.search. Major paths: sloppy and exact phrase matching (SloppyPhraseMatcher, ExactPhraseMatcher) over Lucene84PostingsReader position data; boolean scoring via BlockMaxConjunctionScorer, WANDScorer and ImpactsDISI; interval and span queries (OrderedIntervalsSource, NearSpansOrdered); multi-term query rewrites (IntersectTermsEnum, FuzzyTermsEnum, DocIdSetBuilder/FixedBitSet); point-range queries via BKDReader.intersect; faceting over doc values (FastTaxonomyFacetCounts, SortedSetDocValuesFacetCounts); grouping collectors (FirstPassGroupingCollector, SecondPassGroupingCollector, BlockGroupingCollector); fuzzy suggestions in RespellTask → DirectSpellChecker; and primary-key lookups in PKLookupTask.]

This points to a fundamental rule of performance: it depends on input. Different kinds of Lucene queries end up in different code paths and have vastly different performance characteristics. These nightly benchmarks are very good at predicting the speed of e.g. certain phrase queries, but they can't predict why your Kibana instance is slow.

Memory

The memory allocation rate in these nightly benchmarks is very low, at only about 115 MB/s. The worst I have seen from Lucene is somewhere around 3 GB/s. That was obviously with very different queries.

By looking at the allocation breakdown, we can spot one way to cut the allocated memory roughly in half:

[Interactive flame graph of allocations during searching, rooted at TaskThreads$TaskThread.run → SearchTask.go. Notable sources: Integer.valueOf boxing in TermGroupSelector.advanceTo under SecondPassGroupingCollector.collect; per-segment construction of postings and impacts enums and scorers (Lucene84PostingsReader$BlockDocsEnum, $BlockImpactsDocsEnum, $BlockImpactsPostingsEnum, SegmentTermsEnum/SegmentTermsEnumFrame); iterator and ArrayList churn in MaxScoreCache and WANDScorer max-score bookkeeping (AbstractList iterators, Arrays.asList, ExactPhraseMatcher impacts); and FixedBitSet/DocIdSetBuilder buffers plus BKDReader.intersect state in multi-term and point-range rewrites.]
TermQuery.createWeight(IndexSearcher, ScoreMode, float)
TermStates.build(IndexReaderContext, Term, boolean)
TermStates.loadTermsEnum(LeafReaderContext, Term)
SegmentTermsEnum.seekExact(BytesRef)
SegmentTermsEnum.pushFrame(FST$Arc, BytesRef, int)
SegmentTermsEnum.getFrame(int)
SegmentTermsEnumFrame.<init>(SegmentTermsEnum, int)
BooleanQuery.createWeight(IndexSearcher, ScoreMode, float)
BooleanWeight.<init>(BooleanQuery, IndexSearcher, ScoreMode, float)
IndexSearcher.createWeight(Query, ScoreMode, float)
TermQuery.createWeight(IndexSearcher, ScoreMode, float)
TermStates.build(IndexReaderContext, Term, boolean)
TermStates.loadTermsEnum(LeafReaderContext, Term)
SegmentTermsEnum.seekExact(BytesRef)
PhraseQuery.createWeight(IndexSearcher, ScoreMode, float)
PhraseQuery$1.<init>(PhraseQuery, Query, String, IndexSearcher, ScoreMode, float)
PhraseWeight.<init>(Query, String, IndexSearcher, ScoreMode)
PhraseQuery$1.getStats(IndexSearcher)
TermStates.build(IndexReaderContext, Term, boolean)
TermStates.loadTermsEnum(LeafReaderContext, Term)
IndexSearcher.rewrite(Query)
MultiTermQuery.rewrite(IndexReader)
TopTermsRewrite.rewrite(IndexReader, MultiTermQuery)
TermCollectingRewrite.collectTerms(IndexReader, MultiTermQuery, TermCollectingRewrite$TermCollector)
FuzzyTermsEnum.next()
IntersectTermsEnum.next()
IntersectTermsEnum._next()
IntersectTermsEnum.pushFrame(int)
FST.findTargetArc(int, FST$Arc, FST$Arc, FST$BytesReader)
FST.readArcByDirectAddressing(FST$Arc, FST$BytesReader, int)
FST.readArcByDirectAddressing(FST$Arc, FST$BytesReader, int, int)
FST.readArc(FST$Arc, FST$BytesReader)
Outputs.readFinalOutput(DataInput)
ByteSequenceOutputs.read(DataInput)
ByteSequenceOutputs.read(DataInput)
MultiTermQuery$RewriteMethod.getTermsEnum(MultiTermQuery, Terms, AttributeSource)
FuzzyQuery.getTermsEnum(Terms, AttributeSource)
FuzzyTermsEnum.<init>(Terms, AttributeSource, Term, int, int, boolean)
FuzzyTermsEnum.<init>(Terms, AttributeSource, Term, Supplier)
IndexSearcher.search(Query, int, Sort)
IndexSearcher.searchAfter(FieldDoc, Query, int, Sort, boolean)
IndexSearcher.search(Query, CollectorManager)
IndexSearcher.search(Query, Collector)
IndexSearcher.search(List, Weight, Collector)
BulkScorer.score(LeafCollector, Bits)
Weight$DefaultBulkScorer.score(LeafCollector, Bits, int, int)
Weight$DefaultBulkScorer.scoreAll(LeafCollector, DocIdSetIterator, TwoPhaseIterator, Bits)
TopFieldCollector$SimpleFieldCollector$1.collect(int)
TopFieldCollector$TopFieldLeafCollector.collectCompetitiveHit(int)
LongComparator$LongLeafComparator.setBottom(int)
NumericComparator$NumericLeafComparator.setBottom(int)
NumericComparator$NumericLeafComparator.updateCompetitiveIterator()
BKDReader.estimatePointCount(PointValues$IntersectVisitor)
BKDReader.getIntersectState(PointValues$IntersectVisitor)
TopFieldCollector$TopFieldLeafCollector.setScorer(Scorable)
NumericComparator$NumericLeafComparator.setScorer(Scorable)
NumericComparator$NumericLeafComparator.updateCompetitiveIterator()
Weight.bulkScorer(LeafReaderContext)
TermQuery$TermWeight.scorer(LeafReaderContext)
TermQuery$TermWeight.getTermsEnum(LeafReaderContext)
TermStates.get(LeafReaderContext)
TermStates.loadTermsEnum(LeafReaderContext, Term)
SegmentTermsEnum.seekExact(BytesRef)
FacetsCollector.search(IndexSearcher, Query, int, Collector)
FacetsCollector.doSearch(IndexSearcher, ScoreDoc, Query, int, Sort, boolean, Collector)
IndexSearcher.search(Query, Collector)
IndexSearcher.search(List, Weight, Collector)
BulkScorer.score(LeafCollector, Bits)
Weight$DefaultBulkScorer.score(LeafCollector, Bits, int, int)
Weight$DefaultBulkScorer.scoreAll(LeafCollector, DocIdSetIterator, TwoPhaseIterator, Bits)
MultiCollector$MultiLeafCollector.collect(int)
FacetsCollector.collect(int)
DocIdSetBuilder.grow(int)
DocIdSetBuilder.upgradeToBitSet()
FixedBitSet.<init>(int)
DocIdSetBuilder.ensureBufferCapacity(int)
DocIdSetBuilder.addBuffer(int)
DocIdSetBuilder$Buffer.<init>(int)
RespellTask.go(IndexState)
DirectSpellChecker.suggestSimilar(Term, int, IndexReader, SuggestMode)
DirectSpellChecker.suggestSimilar(Term, int, IndexReader, SuggestMode, float)
DirectSpellChecker.suggestSimilar(Term, int, IndexReader, int, int, float, CharsRefBuilder)
FuzzyTermsEnum.next()
MultiTermsEnum.next()
MultiTermsEnum.pushTop()
IntersectTermsEnum.next()
IntersectTermsEnum._next()
IntersectTermsEnum.pushFrame(int)
FST.findTargetArc(int, FST$Arc, FST$Arc, FST$BytesReader)
FST.readArcByDirectAddressing(FST$Arc, FST$BytesReader, int)
FST.readArcByDirectAddressing(FST$Arc, FST$BytesReader, int, int)
FST.readArc(FST$Arc, FST$BytesReader)
FuzzyTermsEnum.<init>(Terms, Term, int, int, boolean)
FuzzyTermsEnum.<init>(Terms, AttributeSource, Term, Supplier)
FuzzyTermsEnum.bottomChanged(BytesRef)
FuzzyTermsEnum.getAutomatonEnum(int, BytesRef)
MultiTerms.intersect(CompiledAutomaton, BytesRef)
PKLookupTask.go(IndexState)
SegmentTermsEnum.seekExact(BytesRef)
FST.findTargetArc(int, FST$Arc, FST$Arc, FST$BytesReader)
FST.readArcByDirectAddressing(FST$Arc, FST$BytesReader, int)
FST.readArcByDirectAddressing(FST$Arc, FST$BytesReader, int, int)
FST.readArc(FST$Arc, FST$BytesReader)
Outputs.readFinalOutput(DataInput)
ByteSequenceOutputs.read(DataInput)
ByteSequenceOutputs.read(DataInput)
SearchPerfTest.main(String)
SearchPerfTest._main(String)
TaskParser.<init>(IndexState, QueryParser, String, int, Random, String, boolean)
VectorDictionary.<init>(String)
VectorDictionary.parseLine(String)
String.split(String)
String.split(String, int)
String.substring(int, int)
StringLatin1.newString(byte, int, int)
Float.parseFloat(String)
FloatingDecimal.parseFloat(String)
FloatingDecimal.readJavaFormatString(String)
SearchTask.printResults(PrintStream, IndexState)
IndexSearcher.doc(int)
IndexReader.document(int)
BaseCompositeReader.document(int, StoredFieldVisitor)
CodecReader.document(int, StoredFieldVisitor)
CompressingStoredFieldsReader.visitDocument(int, StoredFieldVisitor)
CompressingStoredFieldsReader.document(int)
CompressingStoredFieldsReader$BlockState.document(int)
LZ4WithPresetDictCompressionMode$LZ4WithPresetDictDecompressor.decompress(DataInput, int, int, int, BytesRef)
ArrayUtil.grow(byte, int)
ArrayUtil.growExact(byte, int)

The TermGroupSelector does some kind of grouping of terms, where each term is identified by an integer:

private final Map<Integer, Integer> ordsToGroupIds = new HashMap<>();

Now, when you insert an int into a regular Java collection it is boxed into an Integer object, since collections cannot hold primitives. This boxing is what's causing 50% of the memory allocations. There are libraries that provide collections for primitive types, e.g. fastutil. Using something like that could help. [7]
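To illustrate, here is a minimal sketch of the pattern (the class and method names are made up for this example, not taken from Lucene). Every get and put autoboxes its int arguments into Integer objects; outside the small cached range (-128 to 127), each boxing is a fresh heap allocation:

```java
import java.util.HashMap;
import java.util.Map;

public class BoxingDemo {
    // Same shape as the field in TermGroupSelector:
    // a map from term ordinal to group id.
    static final Map<Integer, Integer> ordsToGroupIds = new HashMap<>();

    static int groupIdFor(int ord) {
        // ord is autoboxed to an Integer here; for values outside
        // -128..127 this allocates a new object on every call.
        Integer existing = ordsToGroupIds.get(ord);
        if (existing != null) {
            return existing; // auto-unboxing back to int
        }
        int groupId = ordsToGroupIds.size();
        // Two more boxing allocations: one for the key, one for the value.
        ordsToGroupIds.put(ord, groupId);
        return groupId;
    }

    public static void main(String[] args) {
        // Values chosen outside the Integer cache range on purpose.
        System.out.println(groupIdFor(1000)); // 0: first group
        System.out.println(groupIdFor(2000)); // 1: second group
        System.out.println(groupIdFor(1000)); // 0: already seen
    }
}
```

A primitive-specialized map, such as fastutil's Int2IntOpenHashMap, stores keys and values in plain int arrays and avoids this boxing entirely.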

Thoughts

This has mostly been me spelunking through code that I don't normally work with. Reading code is great for learning how to build systems - maybe looking at many profiles is good for learning how to work with performance?

As mentioned, I have worked with Lucene before. The profiling data from that work looked nothing like the profiling data from these benchmarks: it showed far more memory being allocated. Both the documents and the queries were different.

Choosing what to include in a benchmark specifies what kind of performance your project cares about. A reasonable approach is to guess which kinds of queries are most common in real-world usage; I believe this is what Lucene does. But it also means that fringe use cases are not always covered. If you have performance problems with Lucene, Elasticsearch, or for that matter _any_ program, I highly encourage you to profile it.

Finally, I would like to thank everyone who has contributed to Lucene. It's a great library.

Footnotes

If you are more curious, there are more complete Blunders views of both the indexing profile and the search profile. If you want to use Blunders for your application, please get in touch.

References

  1. Apache Lucene
  2. Lucene nightly benchmarks
  3. Java Flight Recorder
  4. LUCENE-9004
  5. Boosting Java Performance (with Math.fma)
  6. JDK-8201516
  7. fastutil