
Efficient cache for gigabytes of data written in Go.

BigCache

Fast, concurrent, evicting in-memory cache written to keep a big number of entries without impacting performance. BigCache keeps entries on the heap but omits GC for them. To achieve that, operations take place on byte arrays; therefore entry (de)serialization in front of the cache will be needed in most use cases.

Requires Go 1.12 or newer.


Simple initialization

```go
import "github.com/allegro/bigcache"

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))
```

Custom initialization

When the cache load can be predicted in advance, it is better to use custom initialization, because additional memory allocation can be avoided that way.

```go
import (
	"log"

	"github.com/allegro/bigcache"
)

config := bigcache.Config{
	// number of shards (must be a power of 2)
	Shards: 1024,

	// time after which entry can be evicted
	LifeWindow: 10 * time.Minute,

	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed.
	// Setting to < 1 second is counterproductive; bigcache has a one second resolution.
	CleanWindow: 5 * time.Minute,

	// rps * lifeWindow, used only in initial memory allocation
	MaxEntriesInWindow: 1000 * 10 * 60,

	// max entry size in bytes, used only in initial memory allocation
	MaxEntrySize: 500,

	// prints information about additional memory allocation
	Verbose: true,

	// cache will not allocate more memory than this limit, value in MB
	// if value is reached then the oldest entries can be overridden for the new ones
	// 0 value means no size limit
	HardMaxCacheSize: 8192,

	// callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A bitmask representing the reason will be returned.
	// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
	OnRemove: nil,

	// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A constant representing the reason will be passed through.
	// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
	// Ignored if OnRemove is specified.
	OnRemoveWithReason: nil,
}

cache, initErr := bigcache.NewBigCache(config)
if initErr != nil {
	log.Fatal(initErr)
}

cache.Set("my-unique-key", []byte("value"))

if entry, err := cache.Get("my-unique-key"); err == nil {
	fmt.Println(string(entry))
}
```

LifeWindow & CleanWindow

  1. LifeWindow is a duration. After that time, an entry is considered dead but is not yet deleted.

  2. CleanWindow is the interval at which all dead entries are deleted; entries still within their LifeWindow are kept.


Three caches were compared: bigcache, freecache and map. Benchmark tests were made using an i7-6700K CPU @ 4.00GHz with 32GB of RAM on Ubuntu 18.04 LTS (5.2.12-050212-generic).

Writes and reads

```bash
go version
go version go1.13 linux/amd64

cd caches_bench; go test -bench=. -benchmem -benchtime=4s ./... -timeout 30m
goos: linux
goarch: amd64
pkg: github.com/allegro/bigcache/caches_bench
BenchmarkMapSet-8                      12999889    376 ns/op    199 B/op    3 allocs/op
BenchmarkConcurrentMapSet-8             4355726   1275 ns/op    337 B/op    8 allocs/op
BenchmarkFreeCacheSet-8                11068976    703 ns/op    328 B/op    2 allocs/op
BenchmarkBigCacheSet-8                 10183717    478 ns/op    304 B/op    2 allocs/op
BenchmarkMapGet-8                      16536015    324 ns/op     23 B/op    1 allocs/op
BenchmarkConcurrentMapGet-8            13165708    401 ns/op     24 B/op    2 allocs/op
BenchmarkFreeCacheGet-8                10137682    690 ns/op    136 B/op    2 allocs/op
BenchmarkBigCacheGet-8                 11423854    450 ns/op    152 B/op    4 allocs/op
BenchmarkBigCacheSetParallel-8         34233472    148 ns/op    317 B/op    3 allocs/op
BenchmarkFreeCacheSetParallel-8        34222654    268 ns/op    350 B/op    3 allocs/op
BenchmarkConcurrentMapSetParallel-8    19635688    240 ns/op    200 B/op    6 allocs/op
BenchmarkBigCacheGetParallel-8         60547064   86.1 ns/op    152 B/op    4 allocs/op
BenchmarkFreeCacheGetParallel-8        50701280    147 ns/op    136 B/op    3 allocs/op
BenchmarkConcurrentMapGetParallel-8    27353288    175 ns/op     24 B/op    2 allocs/op
PASS
ok      256.257s
```

Writes and reads in bigcache are faster than in freecache. Writes to map are the slowest.

GC pause time

```bash
go version
go version go1.13 linux/amd64

cd caches_bench; go run caches_gc_overhead_comparison.go

Number of entries: 20000000
GC pause for bigcache: 1.506077ms
GC pause for freecache: 5.594416ms
GC pause for map: 9.347015ms
```

```bash
go version
go version go1.13 linux/arm64

cd caches_bench; go run caches_gc_overhead_comparison.go

Number of entries: 20000000
GC pause for bigcache: 22.382827ms
GC pause for freecache: 41.264651ms
GC pause for map: 72.236853ms
```

The test shows how long the GC pauses are for caches filled with 20 million entries. Bigcache and freecache have very similar GC pause times.

Memory usage

You may encounter system memory reporting that appears to show an exponential increase; this is expected behaviour. The Go runtime allocates memory in chunks, or 'spans', and informs the OS when they are no longer required by changing their state to 'idle'. The spans remain part of the process's resource usage until the OS needs to repurpose the address space.

How it works

BigCache relies on an optimization introduced in Go 1.5 (issue-9477): if a map has no pointers in its keys and values, the GC will omit its content. Therefore BigCache uses map[uint64]uint32, where keys are hashed and values are offsets of entries.

Entries are kept in a byte array, again to omit the GC. The byte array can grow to gigabytes without impacting performance, because the GC sees only a single pointer to it.


BigCache does not handle collisions. When a new item is inserted and its hash collides with a previously stored item's, the new item overwrites the previously stored value.

Bigcache vs Freecache

Both caches provide the same core features, but they reduce GC overhead in different ways. Bigcache relies on map[uint64]uint32; freecache implements its own mapping built on slices to reduce the number of pointers.

Results from benchmark tests are presented above. One advantage of bigcache over freecache is that you don't need to know the size of the cache in advance: when bigcache is full, it can allocate additional memory for new entries instead of overwriting existing ones as freecache currently does. However, a hard max size can also be set in bigcache; check HardMaxCacheSize.

HTTP Server

This package also includes an easily deployable HTTP implementation of BigCache, which can be found in the server package.


Bigcache's genesis is described in the blog post "Writing a very fast cache service in Go".


BigCache is released under the Apache 2.0 license (see LICENSE).

To restore the repository, download the bundle and run:

```bash
git clone allegro-bigcache_-_2019-11-21_09-41-58.bundle
```

Uploader: allegro
Upload date: 2019-11-21