Memcached Doesn't Work the Way You Think It Does

Slides for the 2013-03-19 Geekfest Presentation
by Philip Corliss
on 19 March 2013

Transcript of Memcached Doesn't Work the Way You Think It Does

Memcached Doesn't Work the Way You Think It Does
Phil Corliss
pcorliss@gmail.com
@pcorliss

Memcached - Slab Class - Pages - Chunks

A Problem Arises
"Hey, Birdcage is down..."
Currently handling 1200rpm
Usually handles 4rpm
Requests are coming from a single group
That should only happen if the cache is down

A Seamless Segue...

How Memcached Works From the Client Perspective
Offers a distributed pool of memory across multiple servers.
Clients typically take a list of memcached servers, apply a hashing algorithm to the key, and choose a server to store the value on.
Imagine a distributed and shared volatile pool of memory, accessible via a key-value store interface.
This presentation is not about the client-server interaction.
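As a rough illustration of that client-side hashing (this is a sketch, not any particular client's implementation; the server list and key are made up), a naive modulo scheme looks like this:

require 'digest'

# Hypothetical server list; real deployments pull this from configuration.
SERVERS = ['cache1:11211', 'cache2:11211', 'cache3:11211']

# Hash the key and pick a server from the list. Production clients usually
# use consistent hashing (a hash ring) instead of a plain modulo, so that
# adding or removing a server only remaps a fraction of the keys.
def server_for(key)
  SERVERS[Digest::MD5.hexdigest(key).to_i(16) % SERVERS.size]
end

server_for("user:42:profile")  # => always the same one of the three servers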

How I Thought Memcached Worked

def set(key, val)
  evict_least_recently_used if free_space < val.size
  @data[key] = val
end

"Memcached's APIs provide a giant hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in least recently used (LRU) order."
http://en.wikipedia.org/wiki/Memcached


"Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering."

"At heart it is a simple Key/Value store."
https://code.google.com/p/memcached/wiki/NewOverview

Memcached Made Simple

Memcached On Start
Assigned a certain amount of memory:
memcached -d -m 256  # 256MB
Each MB assigned corresponds to one page.

[pcorliss@web-memcache1-uat ~]$ memcached -vvv
slab class 1: chunk size 96 perslab 10922
slab class 2: chunk size 120 perslab 8738
slab class 3: chunk size 152 perslab 6898
slab class 4: chunk size 192 perslab 5461
...
slab class 39: chunk size 493552 perslab 2
slab class 40: chunk size 616944 perslab 1
slab class 41: chunk size 771184 perslab 1
slab class 42: chunk size 1048576 perslab 1

Slab Class
Collection of 1MB pages with a predefined chunk size
Only data of the specified size range will be placed in a slab class
For instance, data between 28K and 32K will always be stored in slab class 26
Pages are allocated to slab classes if available

Pages
1MB by default
Once allocated, a page cannot be reallocated to another slab class

Chunks
Chunks and data are 1:1
Chunk size is defined by the slab class
Unused space within a chunk is wasted; this can be mitigated by having lots of slab classes
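To make the slab class / chunk relationship concrete, here is a small Ruby sketch of the arithmetic behind the -vvv output above. The base size, growth factor, and 8-byte alignment are assumptions that happen to reproduce the numbers shown; the exact values depend on memcached's version and startup flags.

PAGE_SIZE     = 1_048_576  # pages are 1MB by default
GROWTH_FACTOR = 1.25       # assumed default chunk growth factor

# Chunk sizes start at a base size and grow by the factor, rounded up to
# 8-byte alignment, until they approach the page size; the last class gets
# a full-page chunk.
def chunk_sizes(base = 96)
  sizes = []
  size = base
  while size <= PAGE_SIZE / GROWTH_FACTOR
    sizes << size
    size = (size * GROWTH_FACTOR / 8.0).ceil * 8
  end
  sizes << PAGE_SIZE
end

chunk_sizes.length    # => 42, matching the output above
chunk_sizes.first(4)  # => [96, 120, 152, 192]

# An item goes into the first class whose chunk size can hold it, and it
# occupies one whole chunk there; the leftover space in the chunk is lost.
def slab_class_for(item_bytes, sizes = chunk_sizes)
  sizes.index { |s| s >= item_bytes } + 1  # slab classes are numbered from 1
end

This is also why padding a value by a few KB can push it over a chunk-size boundary into a different slab class, which is presumably how the "add 5K of junk text" fix that comes up later moved the troublesome object out of its exhausted class.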
Questions?
Phil Corliss
pcorliss@gmail.com
@pcorliss
prezi.com/user/pcorliss/

The Results
Median (red): 450ms drop in response time, or 47%
95th percentile (green): 680ms drop, or 34%

How to Fix It

Super Short Term - I'm a busy guy with places to be
The object size was 28K; we added 5K of junk text and our service came back to life.

Short Term
Raise cache limits & Restart Memcached

Medium Term
Track evictions via delta; alert on anything over 1 (see the sketch after this list)

Long Term
Twemcache - https://github.com/twitter/twemcache
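A minimal sketch of the medium-term item above: poll memcached's text-protocol stats command, diff the evictions counter between samples, and alert on the delta. The host, port, polling interval, and the warn line are placeholders; a real setup would feed the delta into whatever graphing and alerting system you already run.

require 'socket'

# Read the global `evictions` counter from memcached's `stats` output.
def evictions(host = 'localhost', port = 11211)
  TCPSocket.open(host, port) do |sock|
    sock.write("stats\r\n")
    while (line = sock.gets)
      break if line.start_with?('END')
      stat, value = line.split[1, 2]
      return value.to_i if stat == 'evictions'
    end
  end
  0
end

# Poll once a minute and alert on the delta, per the slide's threshold.
last = evictions
loop do
  sleep 60
  current = evictions
  warn "memcached evictions delta: #{current - last}" if current - last > 1  # placeholder "alert"
  last = current
end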
Back to our problem...

What Was Actually Happening
All pages allocated
The slab class our data was assigned to had zero free chunks
On a write, a chunk was evicted (our configuration)
Deceptive Graphs - 12K evictions/host/min
Deceptive Graphs - bytes used has nothing to do with pages allocated

>> cache.write("key", 'a'*29*1024)
>> 100.times do |i|
?> result = cache.read("key", :raw => true)
>> break if result.nil?
>> puts "#{i} seconds"
>> sleep 1
>> end

>> cache.write("key", 'foo')
>> 100.times do |i|
?> result = cache.read("key", :raw => true)
>> break if result.nil?
>> puts "#{i} seconds"
>> sleep 1
>> end

0 seconds
1 seconds
...
12 seconds
13 seconds
=> nil

0 seconds
1 seconds
....
100 seconds
=> nil

Bonus Slides
I have so much time left

Eviction

Memcached - Eviction isn't exactly LRU eviction

Saving our Bacon
"If there are no free chunks, and no free pages in the appropriate slab class, memcached will look at the end of the LRU for an item to 'reclaim'. It will search the last few items in the tail for one which has already been expired, and is thus free for reuse. If it cannot find an expired item however, it will 'evict' one which has not yet expired. This is then noted in several statistical counters."
https://code.google.com/p/memcached/wiki/NewUserInternals
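As a toy model of the reclaim-versus-evict behaviour described in that quote (purely illustrative Ruby, not memcached's actual implementation; the class name and the "last few items" count are invented):

class ToySlabClass
  Item = Struct.new(:key, :value, :expires_at)

  def initialize(chunk_count)
    @capacity = chunk_count
    @lru = []                 # index 0 is the least recently used tail
  end

  def set(key, value, ttl)
    @lru.reject! { |i| i.key == key }  # overwrite an existing key
    if @lru.size >= @capacity          # no free chunks, no free pages
      victim = @lru.first(3).find { |i| i.expires_at < Time.now }  # try to reclaim an expired item
      victim ||= @lru.first                                        # otherwise evict an unexpired one
      @lru.delete(victim)
    end
    @lru << Item.new(key, value, Time.now + ttl)
  end

  def get(key)
    item = @lru.find { |i| i.key == key }
    return nil unless item
    @lru.delete(item)
    @lru << item              # touch: move to the most recently used end
    item.value
  end
end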
Least Recently Used (LRU)
Operates at the slab level
Can potentially evict unexpired data when some data has expired
We can get around this by using Twemcache

Twemcache
Eviction options that allow us to reclaim allocated pages:
Random eviction (2) - evict all items from a randomly chosen slab.
Slab LRA eviction (4) - choose the least recently accessed slab, and evict all items from it to reuse the slab.
Slab LRC eviction (8) - choose the least recently created slab, and evict all items from it to reuse the slab. Eviction ignores freeq & lruq to make sure the eviction follows the timestamp closely. Recommended if the cache is updated on the write path.

Also Twemproxy - https://github.com/twitter/twemproxy

Resources
https://code.google.com/p/memcached/wiki/NewUserInternals
http://work.tinou.com/2011/04/memcached-for-dummies.html
https://github.com/twitter/twemcache
https://github.com/twitter/twemproxy

Memcached Tips
1MB value limit by default; for performance reasons your cached elements should be much smaller than this.
Watch out for "super keys"
Monitor the delta for memcached stats (stats, stats slabs)
Monitor unallocated pages
Use a client that implements a hash ring or other failover strategy
More hosts means more fault tolerance and less degradation
Don't store sessions in memcached <-- Controversy
Multi-get: use it (see the sketch below)
Lots of other products in the ecosystem; memcached is still great, but not for everything
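For the hash-ring and multi-get tips above, here is what they look like with one common Ruby client, the Dalli gem (hostnames are placeholders). Dalli spreads keys across the listed servers with consistent hashing, and get_multi fetches many keys in one round trip instead of N serial gets.

require 'dalli'

# Placeholder hosts; Dalli distributes keys across them via a hash ring,
# so losing one server only affects the keys that hashed to it.
cache = Dalli::Client.new(['cache1.example.com:11211', 'cache2.example.com:11211'])

cache.set('user:1:name', 'Alice')
cache.set('user:2:name', 'Bob')

# One network round trip for all three keys; missing keys are simply
# absent from the returned hash.
cache.get_multi('user:1:name', 'user:2:name', 'user:3:name')
# => {"user:1:name"=>"Alice", "user:2:name"=>"Bob"}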