MYSQL+HANDLERSOCKET=NOSQL

http://www.percona.com/live/london-2012/sessions/mysqlhandlersocketnosql
by

Sergey Xek

on 3 December 2012

Transcript of MYSQL+HANDLERSOCKET=NOSQL

Handler ♥♡ = and NoSQL ;-)

SQL is cool. It has lots of cool stuff:

  Aggregate functions
  Subqueries
  JOINs
  Complicated WHERE conditions
  Transactions

  SELECT sr.receiving_id, sc.collection_id
  FROM stock_collection as sc, stock_requisation as srq, stock_receiving as sr
  WHERE (sc.stock_id = '" . strStockID . "'
         AND sc.datemm_issued = '" . strMM . "'
         AND sc.qty_issued >= 0
         AND sc.collection_id = srq.requisition_id
         AND srq.active_status = 'Active')
     OR (sr.stock_id = '" . strStockID . "'
         AND sr.datemm_received = '" . strMM . "'
         AND sr.qty_received >= 0)

... But for a general website most queries are simple (memcached territory):

  SELECT `email` FROM `users` WHERE `id` = 1

HandlerSocket (HS) is a MySQL plug-in with direct access to InnoDB/XtraDB:

  Opens its own socket
  Own protocol, own commands
  No SQL support.
None at all.

Pros:
  faster
  batch-processes requests
  compact protocol
  works okay with 10K connections
  doesn't disable SQL access
  can be used with replication
  bundled with Percona Server

So there is no need for memcached: no extra memory use, no duplicate data, and it's now consistent.

Cons:
  InnoDB/XtraDB only (partly supports other engines)
  no transactions/stored procedures
  some data types/core MySQL features are not supported
  no commercial support
  immature
  blocks table modifications (locking)

What HS is NOT:
  Not a key-value/document store
  Not a ‘binary SQL’ server
  Not a tool for complex queries
  Not a tool for creating/modifying tables
  Not for hosting/shared usage

HS inside: read cycle, write cycle (diagrams in the original slides).

HandlerSocket-MySQL interoperability:
  you read consistent data via HS and via SQL
  HS works okay with MySQL replication
  auto_increment is supported
  current builds invalidate the query cache
  MySQL users & permissions are not used in HS
  table locking via HS & via SQL will conflict

BTW, it's a plugin, so this should work (as seen on the internets):

  install plugin handlersocket soname 'handlersocket.so';
  uninstall plugin handlersocket;

In practice this usually hangs the entire DB. So, be careful with:
  ‘LOCK TABLES ... WRITE’
  ‘ALTER TABLE ...’

Using HandlerSocket

You get ports open: 9998 (read only) and 9999 (read-write).

Workflow:
  Client initiates connection
  Client sends request
  Server sends response
  Client sends next request
  ...

You can send N requests in bulk; you will receive N responses in the order of the requests.

Protocol

The longest chapter... The protocol is text-like binary:
  requests and responses are one line each
  each line ends with \n (0x0A)
  each line consists of a set of tokens separated by \t (0x09)
  a token is either NULL or an encoded string
  NULL is encoded as \0 (0x00)

Strings:
  An empty string is a zero-length token.
  Every byte in the range 0x00-0x0F is prefixed by 0x01 and shifted by 0x40 (eg. 0x03 -> 0x01 0x43).
  Other bytes are left unmodified.

Example command:
  0 \t 3 \t \0 \t f o o \t \n
  tokens: 0, 3, NULL, foo, "" (empty string)
  (\t\t or \t\n means there's an empty string in between)

One request, one response. Empty-string handling is sometimes buggy.

Error responses:
  <errtype> 1 <error text>
  <errtype>: error type, always >= 1
  e.g.: 2 1 open_table

Authentication:
  A <type> <key>
  <type>: must be 1
  If authentication is enabled for a listener, any other request on a connection fails until an authentication request succeeds on that connection.
  Ok response: 0 1
  Fail response: 3 1 unauth

Commands

Opening index:
  P <index_id> <db> <table> <index> <columns> [<fcolumns>]     ([...] — optional)
  <index_id>: decimal number of your choice

  Example (something like a prepared statement for
  SELECT id,box FROM test.store WHERE id=? AND fruit=?):
    P 1 test store PRIMARY id,box fruit

  Ok response: 0 1
  Fail responses: 1 1 open_table
                  2 1 open_table
                  ...

  You can re-open an index with the same <index_id> and possibly other <db>/<table>/<index>.
  You can open the same combination of <db>, <table>, <index> multiple times, possibly with different <columns>.
  You can't manually close indexes. An index is open until the client connection is closed.
  Open indices consume memory and make things work slower. Try to use less than 1000 indices.
  For efficiency, keep <index_id> as small as possible.

Insert:
  <index_id> + <vlen> <v1> ... <vn>
  <index_id>: opened index id

  Ok response: 0 1
  Ok response for tables with an AUTO_INCREMENT field: 0 1 <last_insert_id>

Insert example — open index:
  P 89 test hs4 PRIMARY warehouse,box,fruit,count
  0 1

Insert a row:
  89 + 4 New York A1 melon 4
  0 1 1
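The token-encoding rules described above are easy to get subtly wrong, so here is a minimal Python sketch of them (`encode_token`, `encode_request` and `decode_token` are hypothetical helper names, not part of any official HS client):

```python
# Minimal sketch of the HandlerSocket wire encoding described above.
# Helper names are hypothetical; this is not an official client library.

NULL = b"\x00"  # a NULL token is the single byte 0x00

def encode_token(value):
    """Encode one token: None -> NULL, str/bytes -> escaped bytes."""
    if value is None:
        return NULL
    if isinstance(value, str):
        value = value.encode("utf-8")
    out = bytearray()
    for byte in value:
        if byte <= 0x0F:                       # bytes 0x00-0x0F are escaped:
            out += bytes((0x01, byte + 0x40))  # prefix 0x01, shift by 0x40
        else:                                  # other bytes are unmodified
            out.append(byte)
    return bytes(out)

def encode_request(*tokens):
    """Join encoded tokens with \\t and terminate the line with \\n."""
    return b"\t".join(encode_token(t) for t in tokens) + b"\n"

def decode_token(raw):
    """Reverse the escaping for one token taken from a response line."""
    if raw == NULL:
        return None
    out = bytearray()
    it = iter(raw)
    for byte in it:
        if byte == 0x01:
            out.append(next(it) - 0x40)  # undo the prefix-and-shift
        else:
            out.append(byte)
    return bytes(out)
```

The example command shown earlier is exactly `encode_request("0", "3", None, "foo", "")`, which yields `b"0\t3\t\x00\tfoo\t\n"`; the trailing `\t\n` is the empty-string token.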

89 + 4 New York A2 melon 4
  0 1 2

(The trailing token of each response is the last_insert_id.) Always provide data for all the columns you specified with the <columns> param — why will be discussed later.

Get:
  <index_id> <op> <vlen> <v1> ... <vn> [LIM] [IN] [FILTER]
  <index_id>: opened index id

(* — optional, ** — can be used multiple times)

LIMIT clause (*):
  <limit> <offset>
  Same logic as in SQL.
  When omitted, it works as if 1 and 0 were specified.
  Applied after FILTER.

IN clause (*):
  @ <icol> <ivlen> <iv1> ... <ivn>
  <icol>: zero-based number of the column in <index> which IN is applied to
  <ivlen>: count of <iv1> ... <ivn>
  <iv1> ... <ivn>: values to look for
  The <icol>-th parameter value of <v1> ... <vn> is ignored.

FILTER clause (*, **):
  <ftyp> <fop> <fcol> <fval>
  <ftyp>: F (skip inappropriate rows) or W (end at first inappropriate row)
  <fop>: one of =, !=, <, <=, >, >=
  <fcol>: zero-based number of the column in <fcolumns> of the opened index
  <fval>: value
  Multiple filters can be specified, and they work as the logical AND of them.

Get responses:
  Empty result: 0 <columns count>
  Non-empty result (2 rows): 0 <columns count> row1val1 row1val2 row2val1 row2val2

Get example — open index:
  P 89 test hs2 PRIMARY warehouse,box,fruit,count
  0 1
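Given the response shapes described above, splitting a get response into rows is mechanical. A sketch in Python (`parse_get_response` is a hypothetical helper, not part of any official client):

```python
# Sketch of parsing HS "get" responses, based on the response shapes above.

def parse_get_response(line):
    """Split b'0\\t<ncols>\\tv1\\tv2...\\n' into a list of rows.

    Result rows come back flattened: with 4 columns, every 4
    consecutive values form one row. Raises on a non-zero error code.
    """
    tokens = line.rstrip(b"\n").split(b"\t")
    code, ncols = int(tokens[0]), int(tokens[1])
    if code != 0:
        # error responses look like: <errtype> 1 <error text>
        message = tokens[2].decode() if len(tokens) > 2 else ""
        raise RuntimeError(f"HS error {code}: {message}")
    values = tokens[2:]  # values may still need the protocol un-escaping
    return [values[i:i + ncols] for i in range(0, len(values), ncols)]
```

For example, `parse_get_response(b"0\t4\tVirginia\tA1\tgrapes\t5\n")` yields one 4-column row, and an empty result (`b"0\t4\n"`) yields `[]`.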

Get:
  89 = 1 3
  0 4 Virginia A1 grapes 5

Example: retrieve 1 row by exact id 3 (1-column index).

Get example 2 — open index:
  P 89 test hs2 PRIMARY warehouse,box,fruit,count
  0 1

Get:
  89 >= 1 2 3 0
  0 4 Seattle B1 banana 4 Virginia A1 grapes 5 Virginia B2 watermelon 1

Example: retrieve 3 rows starting from id 2 (1-column index); "3 0" is the LIMIT clause.

Parameter reference (* — optional, ** — can be used multiple times):

  <key>: value of the handlersocket_plain_secret (handlersocket_plain_secret_rw for port 9999) config option
  <db>, <table>, <index>: db, table and index names. To open the primary key, use PRIMARY as <index>.
  <columns>: comma-separated list of column names you'll retrieve/insert/set
  <fcolumns>*: comma-separated list of column names you'll use for filtering

  For insert:
    <vlen>: count of <v1> ... <vn>. Must be <= the count of <columns> in the opened index.
    <v1> ... <vn>: field values in the order of <columns>. All other columns of the table get their default values.

  For get/update/delete:
    <op>: one of =, <, <=, >, >=
    <vlen>: count of <v1> ... <vn>. Must be <= the count of columns forming <index> in the opened index.
    <v1> ... <vn>: column values to look for in <index>
    LIM*: OFFSET-LIMIT clause
    IN*: IN clause
    FILTER*,**: FILTER clause

Delete:
  <index_id> <op> <vlen> <v1> ... <vn> LIM [IN] [FILTER] D
  All parameters are the same as for get; LIM is required here.

Update/delete:
  <index_id> <op> <vlen> <v1> ... <vn> LIM [IN] [FILTER] MOD
  All parameters are the same as for get; LIM is required here.

Update/delete example — open index:
  P 89 test hsmdemo3 PRIMARY count count
  0 1

Delete:
  89 > 1 0 1000 0 F > 0 3 D
  0 1 5

Example: delete rows with id > 0 & count > 3. "1000 0" is the LIM clause, "F > 0 3" is the FILTER clause; the last token of the response is the number of rows deleted.

Delete responses:
  Ok, no rows deleted: 0 1 0
  Ok, 5 rows deleted:  0 1 5

MOD clause:
  <mop> <m1> ... <mn>
  <mop>: U, U? (update), D, D? (delete), +, +? (increment), -, -? (decrement).
  Operations with '?' return the row values before modifying them.
  <m1> ... <mn>: field values in the order of <columns>. Their count can be <= the number of columns in <columns>; other columns are left intact then. Must be numeric for '+', '-' operations. Ignored for 'D', 'D?'.

Regular delete:
  <index_id> <op> <vlen> <v1> ... <vn> LIM [IN] [FILTER] D
Get-and-delete:
  <index_id> <op> <vlen> <v1> ... <vn> LIM [IN] [FILTER] D?
(all other parameters are the same as for get; LIM is required)

Update/delete example 2 — open index:
  P 90 test hsmdemo3 PRIMARY count
  0 1

Increment-and-read:
  90 = 1 8 1 0 +? 10
  0 1 6

Example: get count where id=8 and set count=count+10 where id=8. "1 0" is the LIM clause, "+? 10" is the MOD clause; the response carries the old count value.

‘Peculiarities’

That was hard, right? XtraBackup also won't work.

Supported data types:
  You can read all data types via HS.
  While writing, overflowing data is cut the same way as in SQL.
  You can't write TIMESTAMP fields.
  ‘ON UPDATE CURRENT_TIMESTAMP’ is not supported.

Charsets:
  If you work with UTF8 only — don't worry at all. Just comply with the HS protocol encoding standard.
  HS reads & writes fields in the charset of the field.
  Have UTF8 & CP1251 in the same table? HS works with bytes as-is.
  BLOBs are binary — you read what you write, without any charset applied.
  Out-of-charset bytes are replaced with ‘?’ ASCII chars on insert/update.

Sorting (collations):
  Collations affect >, >=, <, <= operations (but not filters) and define the order you get results in.
  HS reads records in the order of the index.
  In MySQL, indices are ordered according to the collations of the fields they consist of.
  Use http://www.collation-charts.org/mysql60/

Default values:
  On insertion, fields not specified in <columns> of the opened index (P <index_id> <db> <table> <index> <columns> [<fcolumns>]) get their default values.
  Broken for BINARY, ENUM, TIMESTAMP data types.
  NULL as a default value won't work. You'll get zeroed/empty values instead.
  So always provide values for all fields in <columns> in an insert request.

Use cases

Use case 1: banned email lookup table
  We replaced SELECT * FROM ... WHERE name='...' AND domain='...' with an HS get.
  One table, one server per DC. Master-master replicated to the same table in the other DC.
  ~52 million rows, ~5 Gb
  Dual-core Intel(R) Xeon(R) CPU E5503 @ 2.00GHz
  60% CPU used, LA = 0.5
  Inserts/updates go via SQL, <10 RPS
  Selects go via HS: ~1000 RPS, ~3 ms per read

Use case 2: persistent session store
  Key-value store: select/insert/update row by key via HS.
  One table, 1 server, ~16 m rows, ~23 Gb
  12-core Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
  All data fits into memory. Persistent connections to HS.
  Periodical purge (delete) via SQL.
  8% CPU used, LA = 5
  Create: <10 RPS, ~1.2 ms/request
  Update: ~180 RPS, ~1.3 ms/request
  Get: ~3500 RPS, ~0.5 ms/request
  Originally it was slower; moving from MySQL/InnoDB to Percona Server/XtraDB gave us ~4x more performance.

Use case 3: sharded persistent session store
  Now 10 000 tables in 100 databases, 1 MySQL, 1 server
  ~10 m rows, ~20 Gb
  12-core Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
  Sharded by a randomly generated hash.
  All data fits into memory. Persistent connections to HS.
  8% CPU used, LA = 5
  Create: <10 RPS, ~1.3 ms/request
  Update: ~150 RPS, ~1.3 ms/request
  Get: ~1200 RPS, ~1.6 ms/request

Use case 4: persistent cache
  Originally we used memcached, but cache reinitialization was looong...
  We made our own persistent cache with MySQL/HS.
  32 million rows, 14 Gb, sharded across 10 000 tables, 1 server per DC
  12-core Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
  All data fits into memory. Persistent connections to HS.
  11% CPU used, LA = 5
  Create: <10 RPS, ~0.4 ms/request
  Update: <10 RPS, ~0.4 ms/request
  Get: ~14500 RPS at peak, ~0.5 ms/request

So what was the benefit of sharding?
  The single-table setup worked well but had a low max write RPS: one table was a single hot point.
  Second problem: when a table grows, performance drops. Clean obsolete rows daily.
  Try sharding and compare with the single-table setup for your application.
  Key-value operations only: get/set/modify row by key.

Tuning
  Try to shard by key (for datasets > 10 m rows)
  Use Percona Server/XtraDB
  Use persistent connections

In the HS protocol, requests and responses have no unique id, so you can't securely match requests to responses.

There's one problem with persistent connections (the next code iteration will inherit your open socket), so:

  Get key and value where key = '...', then check whether the result key matches the requested key.
  Reopen the connection on syntax errors and I/O errors.
  Prevent passing a socket with your data still in it to the next request.

What else you can play with:
  InnoDB ROW_FORMAT
  InnoDB KEY_BLOCK_SIZE
  HASH indices
  Merging indices

Final part

Further reading:
  HS docs: https://github.com/DeNADev/HandlerSocket-Plugin-for-MySQL/tree/master/docs-en
  HS source code: https://github.com/DeNADev/HandlerSocket-Plugin-for-MySQL/
  HS client libraries: https://github.com/DeNADev/HandlerSocket-Plugin-for-MySQL/blob/master/README
  An article from the authors: http://yoshinorimatsunobu.blogspot.ru/2010/10/using-mysql-as-nosql-story-for.html
  Percona's HS page: http://www.percona.com/doc/percona-server/5.5/performance/handlersocket.html
  Must-see presentation from the HS author: http://www.slideshare.net/akirahiguchi/handlersocket-20100629en-5698215
  These slides: http://bit.ly/handlersocket

For those who survived, here's what was cut out:
  Authentication request
More examples on using HS commands
HS alternatives, their pros & cons
Configuring HS
Lazy index opening technique
HS history
Lots of intermediate research logs

Mind map with more info: http://bit.ly/handlersocket

SQL — HS analogues:
  SELECT a,b,c FROM ...           ->  open index with <columns>=a,b,c
  ... LIMIT 1 OFFSET 0            ->  LIMIT clause
  id BETWEEN 1 AND 2              ->  get by index >= 1 with a W-type FILTER while id <= 2
  WHERE a < 1 AND b > 2           ->  F-type FILTER a < 1, F-type FILTER b > 2
  a IN (...)                      ->  IN clause
  SELECT ... FOR UPDATE, UPDATE   ->  update with '?' operations

FAQ

Why should I use this instead of a ‘normal’ NoSQL DB like MongoDB or Redis?
  1. You are still able to work with the same data via SQL, for those who can't abandon MySQL entirely.
  2. If you can fit your needs into the simple HS command set, you get some core ‘good old SQL DB’ benefits: ACID, good multi-core scaling, efficient data storage.

Should I use an existing client library or should I write my own?
  If you plan on using persistent connections, adapt an existing library to solve the pconnect problems I mentioned, or consider writing your own library.
  Also beware: unclear docs; the work logic & protocol change from time to time without any warning.

Thanks! Questions?
@ryba_xek
s@averin.ru

Slides, code, mindmap: http://bit.ly/pluk2012
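The SQL — HS analogues above can be sketched as request-line builders in Python (hypothetical helper names; the protocol byte-escaping is skipped here since these tokens are plain ASCII):

```python
# Sketch of the "SQL -> HS analogues" table as request-line builders.
# Helper names are hypothetical, not an official client API.

def hs_line(*tokens):
    return "\t".join(tokens) + "\n"

# SELECT a,b,c FROM db.t ... -> open an index with <columns>=a,b,c
def open_index(index_id, db, table, index, columns, fcolumns=None):
    tokens = ["P", str(index_id), db, table, index, columns]
    if fcolumns:
        tokens.append(fcolumns)
    return hs_line(*tokens)

# ... LIMIT <limit> OFFSET <offset> -> LIM clause appended to a get;
# WHERE conditions -> FILTER tuples (ftyp, fop, fcol, fval) after LIM
def get(index_id, op, values, limit=1, offset=0, filters=()):
    tokens = [str(index_id), op, str(len(values)), *map(str, values),
              str(limit), str(offset)]
    for ftyp, fop, fcol, fval in filters:
        tokens += [ftyp, fop, str(fcol), str(fval)]
    return hs_line(*tokens)

# id BETWEEN 1 AND 2 -> get by index >= 1 with a W-type filter while id <= 2
# (assumes the index was opened with <fcolumns>=id, so fcol 0 is id)
between_1_and_2 = get(1, ">=", ["1"], limit=1000, filters=[("W", "<=", 0, 2)])
```

For instance, `get(89, "=", ["3"])` produces the line `89 = 1 3 1 1 0` with tab separators, i.e. the exact-id lookup from the Get example with an explicit default LIM of 1 0.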