Wednesday, 30 September 2009

MySQL processlist phrase book


For your convenience, a short phrase book, starting with an explanation of the process states in which MySQL is mostly busy reading or looking up data from tables:



  • “Sending data” – reading data from tables (or looking it up)

  • “Copying to tmp table” – reading data from tables (or looking it up)

  • “Copying to tmp table on disk” – query needs a rewrite

  • “statistics” – looking up data from tables

  • “Sorting result” – reading data from tables (or looking it up)


Locking is fun:



  • “update” – there is a big chance it is waiting for a lock; on busy systems it can actually be updating as well

  • “Locked” – an even bigger chance that it is a MyISAM table waiting to be converted to InnoDB


Table opening is even funnier:



  • “Opening tables” – some other idiot thread is holding the LOCK_open or similar mutex

  • “Closing tables” – some other idiot thread is holding the LOCK_open or similar mutex

  • “Opening table” – the idiot thread (or a not-that-smart DBA who thinks table_cache should be set to its current value)

  • “Flushing tables” – someone decided that flushing tables is a good idea, but forgot to kill long-running queries first

  • “Waiting for tables” – same as “Flushing tables”


If you have replication:



  • “Connecting to server” – could not connect to server, waiting to retry

  • “Reconnecting after a failed master event read” – master and slave have same @server_id

  • “Registering slave on master” – master and slave have same @server_id

  • “Waiting to reconnect after a failed master event read” – master and slave have same @server_id


Few more tricky ones:



  • “Sleep” – a millisecond ago it was probably not sleeping; due to how process scheduling and kernel-level buffering work, it most likely finished right before the kernel switched to the ‘SHOW PROCESSLIST’ thread.

  • “Writing to net” – usually it is not writing to the net (the client has blocked); the actual write to the net is usually a nearly non-blocking operation, as long as the client doesn’t sleep.

  • “Killed” – actually, someone tried to kill it, but it doesn’t care, as it doesn’t check the thd->killed flag


And in the user column:



  • “unauthenticated user” – are you sure your DNS is working right?

  • “system user” – it isn’t a user; these are the (hopefully plural) replication threads, which don’t have any “user” credentials

  • “root” – either that is you running SHOW PROCESSLIST, another session you forgot you have open, or you forgot to create a separate user for your application


I probably left out quite a few important ones, but some of them are self-evident (such as “init” and “end”), and others will probably never show up :)
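As a quick way to see how many threads are currently in each of these states, something like the following can be used (a sketch assuming MySQL 5.1 or later, where INFORMATION_SCHEMA.PROCESSLIST is available):

-- count non-idle threads per state; the states are the phrases above
SELECT state, COUNT(*) AS threads
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE command <> 'Sleep'
GROUP BY state
ORDER BY threads DESC;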



Monday, 28 September 2009

Understanding Innodb Transaction Isolation


The MySQL InnoDB storage engine provides ACID compliance, the ‘I’ being isolation. There are four isolation levels defined in MySQL with the tx_isolation system variable: READ-UNCOMMITTED, READ-COMMITTED, REPEATABLE-READ and SERIALIZABLE.


Generally MySQL installations do not modify the default value of tx_isolation = REPEATABLE-READ; however, with a number of clients I have seen the default value changed to READ-COMMITTED. When I attempt to identify the reason why, I have always received the same answer: Oracle uses a default transaction isolation of READ-COMMITTED. See how Oracle Manages Data Concurrency and Consistency for more information.


However, while the literal string is the same, the actual implementation of READ-COMMITTED in Oracle more closely resembles REPEATABLE-READ in MySQL.


The following demonstrates what you can expect to see between the operation of these two modes.

REPEATABLE-READ
Session 1 | Session 2


DROP TABLE IF EXISTS transaction_test;
CREATE TABLE transaction_test(
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
val VARCHAR(20) NOT NULL,
created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY(id)
) ENGINE=InnoDB DEFAULT CHARSET latin1;

INSERT INTO transaction_test(val) VALUES ('a'),('b'),('c');



SELECT @@global.tx_isolation, @@session.tx_isolation;
+-----------------------+------------------------+
| @@global.tx_isolation | @@session.tx_isolation |
+-----------------------+------------------------+
| REPEATABLE-READ | REPEATABLE-READ |
+-----------------------+------------------------+
START TRANSACTION;
SELECT * FROM transaction_test;
+----+-----+---------------------+
| id | val | created |
+----+-----+---------------------+
| 1 | a | 2009-09-21 00:19:43 |
| 2 | b | 2009-09-21 00:19:43 |
| 3 | c | 2009-09-21 00:19:43 |
+----+-----+---------------------+
SELECT SLEEP(20);



START TRANSACTION;
INSERT INTO transaction_test(val) VALUES ('x'),('y'),('z');
SELECT * FROM transaction_test;
+----+-----------------+---------------------+
| id | val | created |
+----+-----------------+---------------------+
| 1 | a | 2009-09-21 00:19:43 |
| 2 | b | 2009-09-21 00:19:43 |
| 3 | c | 2009-09-21 00:19:43 |
| 4 | x | 2009-09-21 00:21:00 |
| 5 | y | 2009-09-21 00:21:00 |
| 6 | z | 2009-09-21 00:21:00 |
+----+-----------------+---------------------+
COMMIT;



INSERT INTO transaction_test(val) VALUES (@@session.tx_isolation);
SELECT * FROM transaction_test;
+----+-----------------+---------------------+
| id | val | created |
+----+-----------------+---------------------+
| 1 | a | 2009-09-21 00:19:43 |
| 2 | b | 2009-09-21 00:19:43 |
| 3 | c | 2009-09-21 00:19:43 |
| 7 | REPEATABLE-READ | 2009-09-21 00:21:01 |
+----+-----------------+---------------------+

COMMIT;


READ-COMMITTED


SET SESSION tx_isolation='READ-COMMITTED';
TRUNCATE TABLE transaction_test;
INSERT INTO transaction_test(val) VALUES ('a'),('b'),('c');



SELECT @@global.tx_isolation, @@session.tx_isolation;
+-----------------------+------------------------+
| @@global.tx_isolation | @@session.tx_isolation |
+-----------------------+------------------------+
| REPEATABLE-READ | READ-COMMITTED |
+-----------------------+------------------------+
START TRANSACTION;
SELECT * FROM transaction_test;
+----+-----+---------------------+
| id | val | created |
+----+-----+---------------------+
| 1 | a | 2009-09-23 22:49:44 |
| 2 | b | 2009-09-23 22:49:44 |
| 3 | c | 2009-09-23 22:49:44 |
+----+-----+---------------------+
SELECT SLEEP(20);



START TRANSACTION;
INSERT INTO transaction_test(val) VALUES ('x'),('y'),('z');
SELECT * FROM transaction_test;
+----+-----+---------------------+
| id | val | created |
+----+-----+---------------------+
| 1 | a | 2009-09-23 22:49:44 |
| 2 | b | 2009-09-23 22:49:44 |
| 3 | c | 2009-09-23 22:49:44 |
| 4 | x | 2009-09-23 22:52:38 |
| 5 | y | 2009-09-23 22:52:38 |
| 6 | z | 2009-09-23 22:52:38 |
+----+-----+---------------------+
COMMIT;



INSERT INTO transaction_test(val) VALUES (@@session.tx_isolation);
SELECT * FROM transaction_test;
+----+----------------+---------------------+
| id | val | created |
+----+----------------+---------------------+
| 1 | a | 2009-09-23 22:49:44 |
| 2 | b | 2009-09-23 22:49:44 |
| 3 | c | 2009-09-23 22:49:44 |
| 4 | x | 2009-09-23 22:52:38 |
| 5 | y | 2009-09-23 22:52:38 |
| 6 | z | 2009-09-23 22:52:38 |
| 7 | READ-COMMITTED | 2009-09-23 22:56:10 |
+----+----------------+---------------------+

COMMIT;



As you can see, under READ-COMMITTED your result set can change during the transaction. However, how practical is this example in an actual application?


In what circumstances would you consider using READ-COMMITTED? Is there an improvement in locking contention that can lead to less deadlock contention? What is the overhead in other areas?


Harrison writes in My Favorite New Feature of MySQL 5.1: Less InnoDB Locking that you get the least InnoDB locking in 5.1 with READ-COMMITTED. Note that, as mentioned, the impact is a change in replication mode that may have a more dramatic effect.


Heikki Tuuri comments in Understanding InnoDB MVCC that using READ-COMMITTED should help in a specific locking issue.


I am still unclear on the specific benefits in general terms for all environments. A review of The InnoDB Transaction Model and Locking, and specifically Consistent Nonlocking Reads, provides “With READ COMMITTED isolation level, each consistent read within a transaction sets and reads its own fresh snapshot”, which indicates that for certain workloads the reduced locking is a benefit.


Every environment is different and ultimately the actual transaction statements will determine what options and benefits work best.
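If, after testing like the above, you do decide to change the server default, the isolation level can also be set in my.cnf; a minimal sketch using the standard option name:

[mysqld]
# server-wide default; individual sessions can still override it
transaction-isolation = READ-COMMITTED

Individual sessions can still override this with SET SESSION tx_isolation, as shown in the demonstration above.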



Wednesday, 23 September 2009

Fun with mysqlslap benchmarking

mysqlslap is a good benchmarking tool, but in my opinion it can be much more versatile. Recently I was benchmarking a problem that was not hard to solve, but somewhat tricky to benchmark. Using mysqlslap was the fastest way to get the answers I wanted with the least amount of BS, or so I thought!



Let's take a look:



-- The problem

1. A MySQL instance with 5 databases

2. Only one of the databases looked to be the problem from system analysis

3. Light concurrency issues suspected, given that the tables were MyISAM and the information gathered from the customer



-- How to run a close to real world benchmark

Turning on the general query log is a must in this situation, so I gathered 1 hour's worth of transactions. Please keep in mind that you have to take a snapshot of the data before you enable the general query log, for obvious reasons.



Now I have a snapshot of the data and then 1 hour of general query logging thereafter. Originally I thought that I would be able to feed the general query log into mysqlslap and be done with it. This, however, was not the way of it.



-- Creating the files

I knew from the system analysis that one of the 5 databases was the root of the issue. From there I started to grep through the general log to find all of the queries that used table names from that database. Unfortunately, many of the table names were duplicated throughout all of the databases on the server.



I wrote a quick in-line script to get the connection ids for all the connections on the database I was interested in, easy enough!



# NOTE: depending on your version of MySQL

## This example is for mysql 5.4


Shell> for var in `cat mysql.log | grep Connect | awk '{print $2}' | cut -d: -f3`; do echo ${var:2:50} >> out.txt; done



I now have all of the connection ids for the database I want to benchmark. From here I need to parse through the mysql.log (the general log) to get the queries that I would like to run with mysqlslap. Here goes:



Shell> for x in `cat ./out.txt`; do cat mysql.log | awk -v x="$x" '{ if ( $1 == x ) { printf("%s ;\n",$0) } }'; done | awk -FQuery '{print $2 " ;"}' | grep -v "INSERT" >> myqueries.txt



Are there other ways to do it? Sure. This is just one, and not bad for a few minutes of work.



-- The benchmark

With the snapshot in place and mysql running we can now make a “close to real world” benchmark.



Shell> mysqlslap --concurrency=5 \

--iterations=5 --query=myqueries.txt \

--delimiter=';'



-- Repeat the benchmark with a different configuration or my.cnf tweak.

1. restore mysql from the snapshot taken

2. tweak mysql

3. run mysqlslap
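A rough sketch of wrapping those three steps in a loop; restore_snapshot.sh and the my.cnf variants are hypothetical placeholders for your own restore procedure and configuration tweaks:

for cnf in my.cnf.baseline my.cnf.tweak1 my.cnf.tweak2; do
    ./restore_snapshot.sh                  # hypothetical: restore the datadir from the snapshot
    cp $cnf /etc/my.cnf && service mysql restart
    mysqlslap --concurrency=5 --iterations=5 \
      --query=myqueries.txt --delimiter=';' > results.$cnf.txt
done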

Be nice to the query cache

The query cache has had an interesting history with MySQL. I don't have much experience with it. I might want to use it in the future so I enabled it during a run of sysbench readonly on an 8-core server. The results are fascinating. Performance with the query cache enabled is bad for MySQL 5.0.44, good for MySQL 5.0.84 and then bad again for MySQL 5.1.38. By bad I mean that throughput does not increase with concurrency as it does when the query cache is disabled.

This lists the throughput in the number of transactions per second reported by sysbench readonly using an 8-core server. It was run for 1, 2, 4, 8, 16 and 32 concurrent sessions.

Throughput with query cache enabled:
Sessions    1     2     4     8    16    32
5.0.44    663   746   872   834   730   694
5.0.84    639  1033  1819  3216  2636  2466
5.1.38    662   725   818   782   731   699

Throughput with query cache disabled:
Sessions    1     2     4     8    16    32
5.0.44    856  1429  2807  4431  4327  4199
5.0.84    790  1466  2781  4393  4255  4172
5.1.38    747  1387  2698  4385  4293  4152


Configuration for the test with the query cache enabled:

innodb_buffer_pool_size=2000M
innodb_log_file_size=100M
innodb_doublewrite=0
innodb_flush_method=O_DIRECT
innodb_thread_concurrency=8
max_connections=500
innodb_max_dirty_pages_pct=80
innodb_flush_log_at_trx_commit=2
query_cache_type=1
query_cache_size=100000000


Configuration for the test with the query cache disabled:

innodb_buffer_pool_size=2000M
innodb_log_file_size=100M
innodb_doublewrite=0
innodb_flush_method=O_DIRECT
innodb_thread_concurrency=8
max_connections=500
innodb_max_dirty_pages_pct=80
innodb_flush_log_at_trx_commit=2


The sysbench command line:

for nt in 1 2 4 8 16 32; do
../sysbench --test=oltp --oltp-table-size=2000000 --max-time=180 --max-requests=0 \
--mysql-table-engine=innodb --db-ps-mode=disable --mysql-engine-trx=yes \
--oltp-read-only --num-threads=$nt run
done
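As a side note, to confirm the cache is actually being exercised during a run like this, the standard counters can be checked (these variables exist in all of the versions tested):

mysql> SHOW GLOBAL VARIABLES LIKE 'query_cache%';
mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';
-- Qcache_hits relative to Com_select gives a rough hit rate, and a fast
-- growing Qcache_lowmem_prunes suggests the cache is thrashing.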


There appears to be much more mutex contention in the query cache for 5.1.38 (CPU time is lower, idle time is higher). This lists typical vmstat output when sysbench is run with 8 concurrent sessions and the query cache is enabled:

# output for MySQL 5.0.84
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
6 0 17732 32097788 1417064 29284560 0 0 0 53 1026 213912 71 12 18 0 0

# output for MySQL 5.1.38
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
5 0 17732 32120104 1417076 29285028 0 0 0 117 1020 102607 23 6 71 0 0


Monday, 21 September 2009

First USB 3.0 product gets certified, floodgates get closer to breaking



You heard it here first, folks: SuperSpeed USB is a reality. After waiting around for what feels like ages, USB 3.0 can now say it has its first certified product in NEC's xHCI host controller. We know this may not be the most exciting first product to get the all-important seal of approval, but you've got to start somewhere. And hey, for those who know a thing or two about building a product to spec, having a legitimate host controller could sure come in handy. We spoke to Jeff Ravencraft, president and chairman of the USB-IF (the group that oversees certification and the like), and he stated that (officially) the group is still anticipating end products to hit shelves in early 2010. That said, this here host controller is available now in the open market, and he did confess that a number of manufacturers would likely try to get USB 3.0-equipped wares onto store shelves before that magical day in December.

He also noted that a smattering of companies would be showcasing USB 3.0 gear at upcoming trade shows, with a Buffalo external hard drive, an ExpressCard-to-USB 3.0 adapter and even a full-on laptop with a functioning USB 3.0 port making a stop at IDF later this week. Aside from the aforesaid ExpressCard adapter -- which will let existing lappies enjoy the spoils of SuperSpeed USB -- he also noted that a PCI card would be available for desktop users looking to add a few sockets to their rig. We were also informed that USB 3.0 receptor ports will play nice with USB 2.0 cables and gadgets, albeit at USB 2.0 speed; additionally, USB 3.0 wares will be able to connect via older USB 2.0 sockets, though again at a slower rate. We've got to say, the dual-backwards compatibility is pretty sweet.

We also asked him about cabling and the general market transition to USB 3.0, and he did note that USB 3.0 cables will be physically different inside. In other words, a USB 2.0 cable cannot carry data at USB 3.0 speeds, end of story. To enjoy USB 3.0, you'll need a USB 3.0 receptor port, a USB 3.0 cable and a USB 3.0 product; if you swap any of those pieces with USB 2.0, everything slows to USB 2.0 rates. Jeff also noted that the USB-IF would be coordinating the release of end products and cables, and he even said that older USB protocols may actually never completely fade from the market. After all, wireless keyboards would never need USB 3.0 speeds unless some sort of display were integrated, and particularly in cheaper products, using an older USB chipset could help keep costs down and MSRPs lower. So yeah, USB 3.0 is officially here, but the public can't play with any USB 3.0 gear just yet. But if our hunches are correct, we'd bet on at least one or two big players pushing out SuperSpeed-packin' wares before the year ends.


A working "progress bar" for a huge ALTER TABLE

A working "progress bar" for a huge ALTER TABLE: "My friend Gabriel came up with a working 'progress bar' for ALTER TABLE. Until MySQL / Drizzle can do this 'natively', this is a pretty neat trick:

http://gabrielcain.com/blog/2009/08/05/mysql-alter-table-and-how-to-observe-progress/

Saturday, 19 September 2009

How do you determine the InnoDB plugin version?

This works:

mysql> show variables like "innodb_version"\G
*************************** 1. row ***************************
Variable_name: innodb_version
Value: 1.0.4


This does not, because it always reports version 1.0 for PLUGIN_VERSION, at least until InnoDB plugin 2.0 arrives.

mysql> select * from plugins where PLUGIN_NAME="InnoDB"\G
*************************** 1. row ***************************
PLUGIN_NAME: InnoDB
PLUGIN_VERSION: 1.0
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: STORAGE ENGINE
PLUGIN_TYPE_VERSION: 50138.0
PLUGIN_LIBRARY: NULL
PLUGIN_LIBRARY_VERSION: NULL
PLUGIN_AUTHOR: Innobase Oy
PLUGIN_DESCRIPTION: Supports transactions, row-level locking, and foreign keys
PLUGIN_LICENSE: GPL


How to generate per-database traffic statistics using mk-query-digest


We often encounter customers who have partitioned their applications among a number of databases within the same instance of MySQL (think application service providers who have a separate database per customer organization … or wordpress-mu type of apps). For example, take the following single MySQL instance with multiple (identical) databases:



SHOW DATABASES;
+----------+
| Database |
+----------+
| db1 |
| db2 |
| db3 |
| db4 |
| mysql |
+----------+

Separating the data in this manner is a great setup for being able to scale by simply migrating a subset of the databases to a different physical host when the existing host begins to get overloaded. But MySQL doesn’t allow us to examine statistics on a per-database basis.


Enter Maatkit.


There is an often-ignored gem in Maatkit’s mk-query-digest, and that is the --group-by argument. This can be used to aggregate information by tables, hosts, users, or databases (full documentation is available via perldoc).



%> perl mk-query-digest --limit 100% --group-by db slow.log
...
# Rank Query ID Response time Calls R/Call Item
# ==== ======== ============= ===== ====== ====
# 1 0x 6000 60.0% 6000 0.5124 db3
# 2 0x 2000 20.0% 2000 0.0112 db1
# 3 0x 1500 15.0% 1500 0.1665 db2
# 4 0x 500 5.0% 500 0.0022 db4

So here we can see that the majority (60%, to be exact) of execution time is spent in db3. If the server is reaching its capacity and the next most useful performance optimization is to migrate a database to a different server, you know exactly which database to move (db3) and how much room that will give you on the original host (60% growth) and on the new host (40% growth), which may have a direct bearing on your hardware selection.
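One practical note: for the digest to reflect all per-database traffic rather than only slow queries, the slow log can be opened up temporarily. A minimal sketch for MySQL 5.1 (option names differ slightly on 5.0):

mysql> SET GLOBAL long_query_time = 0;   -- log every query, not just slow ones
mysql> SET GLOBAL slow_query_log = 1;
-- ... capture a representative period, then turn it back off
mysql> SET GLOBAL slow_query_log = 0;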


Let Baron know how awesome you think this is by getting him a gift from his Amazon Wish List!




Entry posted by Ryan Lowe



Monitoring MySQL – The error log


It is important that you monitor the MySQL error log. There are a few different options available for defining the details of the log. If not specified the default is [datadir]/[hostname].err. This is not an ideal location for 2 reasons.


First, a correctly configured MySQL datadir will have permissions only for the mysql user; access is generally restricted to that user only, with no group or world permissions. Other users/groups should still be able to have limited access to the MySQL error log.


Second, the datadir is for data, not logs, especially logs that can potentially fill file systems. I am referring here to more than just the error log.


I would recommend you create a separate directory for MySQL logs such as the error, slow and general logs. An example I implement for single installation environments using Linux mysql packages is:



mkdir /var/log/mysql
chown mysql:mysql /var/log/mysql
chmod 750 /var/log/mysql

There does not seem to be a consensus over whether to include the hostname in the error log filename. My preference is to not include it; I would rather the filename be consistent across multiple servers. The counter-argument is: what about when consolidating logs from multiple servers? I discount this because you have to connect to the server to retrieve the logs anyway, and can create a sub-directory per hostname for the consolidated logs.


With Linux distributions you may not find log files where you expect. Ubuntu packages, for example, send the log to syslog. While the theory is to make system logging and monitoring easier, it makes MySQL-specific monitoring more difficult. You also suffer a logrotate problem where you may only have 7 days of logs. I prefer to have access to all historical MySQL log information.


The best choice is to define the error log in both the [mysqld_safe] and [mysqld] sections of your server's my.cnf:



[mysqld_safe]
log-error=/var/log/mysql/error.log

[mysqld]
log-error=/var/log/mysql/error.log

In MySQL 5.1 you have the luxury of different output destinations, FILE, TABLE or BOTH, for the general log and the slow log with --log-output. No such option exists for the error log.


Other my.cnf options to be aware of include:



  • log-warnings | skip-log-warnings

  • syslog | skip-syslog

Error log monitoring is also generally lacking in the standard monitoring products/plugins that present MySQL status information. In my MySQL monitoring solutions I provide a line count of the MySQL error log, so that a delta can be easily detected and then reviewed more proactively, as sketched below.
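A minimal sketch of that line-count delta check; the paths and the alerting step are placeholders to adapt to your own monitoring:

#!/bin/sh
# compare the current error log line count with the previously recorded count
LOG=/var/log/mysql/error.log
STATE=/var/tmp/mysql_error_log.count
LAST=$(cat $STATE 2>/dev/null || echo 0)
NOW=$(wc -l < $LOG)
if [ "$NOW" -gt "$LAST" ]; then
    echo "MySQL error log grew by $(($NOW - $LAST)) lines"   # hook your alerting in here
fi
echo $NOW > $STATE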


One issue with a recent client was the lack of access to the physical box by different parties, and therefore the lack of access to the log. Identifying that something needs to be viewed, and then being able to view it, is an important problem to solve.


References


Some other references for MySQL error log monitoring.




MySQL Webinars - On Demand

http://www.mysql.com/news-and-events/on-demand-webinars/

SystemTap – DTrace for Linux ?


Since DTrace was released for Solaris I have been missing it on Linux systems... It can't be included in Linux for the same reason ZFS can't be: licensing. Both ZFS and DTrace are under the CDDL, which is incompatible with the GPL. So you can see DTrace and ZFS on Solaris, FreeBSD and MacOS, but not on Linux.


However, I have been following the SystemTap project for a couple of years (it was started in 2005), which is supposed to provide functionality similar to DTrace.


Why am I interested in this tool? Because there is no simple way under Linux to profile a load that is not CPU-bound (for CPU-bound loads there is OProfile, see for example http://mysqlinsights.blogspot.com/2009/08/oprofile-for-io-bound-apps.html). That is, for IO-bound or mutex contention problems OProfile is not that useful.


SystemTap is included in RedHat 5 releases, but I was not able to get it running even on CentOS 5.3 (it crashed and hung every so often). The latest RedHat 5.4 update promised some more fixes to SystemTap, so I decided to give it another try as soon as I got RedHat 5.4 in hand.


Surprisingly, it now runs much more stably. I was able to get profiling of kernel and system calls working.

Here is a simple script to show IO activity per disk per process (it is similar to iotop, but iotop is not available in RedHat / CentOS), with output like this:



CODE:

Mon Sep 14 05:22:14 2009 , Average:20353Kb/sec, Read: 4337Kb, Write: 97428Kb

UID   PID   PPID  CMD              DEVICE  T  BYTES
27    3701  3651  mysqld           dm-0    W  99766272
27    3701  3651  mysqld           dm-0    R  4440064
0     2324  2296  hald-addon-stor  dm-0    R  1242

Mon Sep 14 05:22:19 2009 , Average:21756Kb/sec, Read: 4263Kb, Write: 104521Kb

UID   PID   PPID  CMD              DEVICE  T  BYTES
27    3701  3651  mysqld           dm-0    W  107029504
27    3701  3651  mysqld           dm-0    R  4358144
0     2883  2879  pam_timestamp_c  dm-0    R  6528
0     2324  2296  hald-addon-stor  dm-0    R  828
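As the script itself does not appear above, here is an illustrative sketch (not the author's original) of a per-process I/O tally built on the vfs.read.return / vfs.write.return tapset probes, the same approach the stock disktop.stp example uses:

#! /usr/bin/env stap
global io
# accumulate bytes actually read/written, per process and device
probe vfs.read.return  { if ($return > 0 && devname != "N/A")
        io[pid(), execname(), devname, "R"] += $return }
probe vfs.write.return { if ($return > 0 && devname != "N/A")
        io[pid(), execname(), devname, "W"] += $return }
# every 5 seconds print the top consumers and reset the counters
probe timer.s(5) {
        printf("%-6s %-16s %-8s %-2s %s\n", "PID", "CMD", "DEVICE", "T", "BYTES")
        foreach ([pid, cmd, dev, t] in io- limit 10)
                printf("%-6d %-16s %-8s %-2s %d\n", pid, cmd, dev, t, io[pid, cmd, dev, t])
        delete io
}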







This example may be simple, but the point is that there is a rich scripting language with tons of probes you can intercept (kernel functions, filesystem driver functions, any other drivers and modules).


What else I find very useful in SystemTap is that it can work in userspace. That is, you can use it to profile your own and any other application that has -debuginfo packages (all -debuginfo packages for standard RedHat RPMS can be downloaded from the RedHat FTP); basically it is the information you get by compiling with gcc -g.


Well, there seems to be another war story going on here. To profile a userspace application with SystemTap your kernel has to be patched with the uprobes patch, which fortunately is included in RedHat-based kernels, but is not yet included in the vanilla kernel. So I am not sure whether you can get userspace profiling running on other distributions.


Here is a quite simple script that I tried to hack around MySQL:



CODE:

probe process("/usr/libexec/mysqld").function("*innobase*")
{
        printf("%s(%s)\n", probefunc(), $$parms)
}







with the output I get when running a simple SELECT against an InnoDB table:



CODE:

stap -v lsprob.stp
Pass 1: parsed user script and 52 library script(s) in 240usr/10sys/261real ms.
Pass 2: analyzed script: 107 probe(s), 22 function(s), 1 embed(s), 0 global(s) in 540usr/20sys/554real ms.
Pass 3: using cached /root/.systemtap/cache/4f/stap_4f8b8738f58ff78e294c62765ac83d91_36925.c
Pass 4: using cached /root/.systemtap/cache/4f/stap_4f8b8738f58ff78e294c62765ac83d91_36925.ko
Pass 5: starting run.
innobase_register_trx_and_stmt(thd=? )
innobase_register_stmt(thd=? )
innobase_map_isolation_level(iso=? )
innobase_release_stat_resources(trx=0x2aaaaaddb8b8 )
convert_search_mode_to_innobase(find_flag=? )
innodb_srv_conc_enter_innodb(trx=? )
srv_conc_enter_innodb(trx=0x2aaaaaddb8b8 )
innodb_srv_conc_exit_innodb(trx=? )
srv_conc_exit_innodb(trx=0x2aaaaaddb8b8 )
innobase_release_temporary_latches(thd=0x1a6aced0 )
innobase_release_stat_resources(trx=? )
srv_conc_force_exit_innodb(trx=0x2aaaaaddb8b8 )







Again, this case may be too simple, but basically you can intercept internal MySQL functions and script whatever you want around them (measure time, count calls, gather statistics). I have not yet figured out how to intercept C++-style functions (i.e. ha_innobase::index_read), so there is an area to investigate.


So I am going to play with it more and do some useful scripting to get profiling of MySQL.


And it seems SystemTap can re-use the DTrace probes available in an application; as you may know, DTrace probes were added to MySQL 5.4, so it will be interesting to see how that works.


I should mention that there is a second alternative to DTrace... It's... a DTrace port. Looking at the blog it seems to be a one-man project, and currently the author is fighting with resolving userspace issues. I gave it a try, but on my current RedHat 5.4 I got a 'Kernel panic' after several runs, so that's enough for now.




Entry posted by Vadim



MySQL Replication 102


One of the most asked questions is how to set up MySQL replication. The MySQL Reference Manual provides a good Replication How To as a starting guide on MySQL Replication 101.


MySQL replication has many uses including read scalability, backups, failover, online maintenance, upgrade testing and verification, software upgrades, alternative data or structure for performance queries and even benchmarking to name the popular uses.


When reviewing an installation of MySQL replication I use the following as part of my checklist of steps used for confirming your replication environment is operational.


Master Variables



  • server-id – Replication will not work without this correctly set and unique

  • log-bin – Pre-requisite for working replication

  • log-bin-index

  • max_binlog_size

  • binlog_cache_size

  • expire_logs_days – a value from 5 to 10 is good, not set can result in a full disk.

  • binlog-do-db/binlog-ignore-db – Use with caution

  • sync_binlog

  • innodb_support_xa


Slave Variables



  • server-id – Replication will not work without this correctly set and unique

  • read_only = TRUE

  • log-bin – may or may not be present

  • relay-log

  • relay-log-index

  • max_binlog_size

  • binlog_cache_size

  • expire_logs_days – a value from 5 to 10 is good, not set can result in a full disk.

  • replicate-do-???? – Warning, use these with caution. Your slave will not be the same as your master.

  • “slave-skip-errors” – Warning, this can lead to your slave being inconsistent with your master.

On the Master I audit the following information.



  • SHOW MASTER STATUS

    • If any Binlog_Do_DB, then a SHOW SCHEMAS for verification



  • SHOW MASTER LOGS

    • Confirm physical files as well as available diskspace on log-bin disk partition



  • SHOW SLAVE STATUS (in a true master/slave environment this should be empty)

  • SHOW GLOBAL VARIABLES LIKE 'binlog_cache_size';

  • SHOW GLOBAL STATUS LIKE 'Binlog%';

  • SELECT host,user,password FROM mysql.user WHERE Repl_slave_priv='Y' AND Super_priv='N';

On the Slave I audit the following information.



  • SHOW SLAVE STATUS

  • SHOW MASTER STATUS – This will determine if you have log-bin enabled on the slave


The key information for MySQL slaves is in the SHOW SLAVE STATUS command. An example output is:



mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 10.10.1.1
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: bin-log.001817
Read_Master_Log_Pos: 369684547
Relay_Log_File: relay-log.000449
Relay_Log_Pos: 42347742
Relay_Master_Log_File: bin-log.001817
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 369684547
Relay_Log_Space: 42347742
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
1 row in set (0.00 sec)

It is important that you learn and understand these values. In this above case, replication is NOT running as indicated by Slave_IO_Running and Slave_SQL_Running.
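Once the cause has been addressed, a minimal sketch of restarting and re-checking replication is:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both now report Yes,
-- and Seconds_Behind_Master should be a number rather than NULL.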


This information is just an introduction as to what to look at. In my next lesson, I’ll go into more detail on the output of the various commands, as well as describe in greater detail the relationship of the underlying files that are important for a working MySQL Replication environment.


Other References


Verifying MySQL Replication in Action

MySQL Replication Architecture



Thursday, 17 September 2009

Using TMPFS for MySQL's tmpdir

There have been a lot of systems I have looked at where the Created_tmp_disk_tables was very high. MySQL has good documentation on how it uses memory, specifically temp tables, here.



I would like to talk about not 'why' MySQL does this, but how to speed up performance when MySQL internally, or users, create temporary tables on disk. A great solution is TMPFS; a quick how-to is as follows:



-- Before you start

1. Make sure you allocate enough space to TMPFS

-- 2GB is usually safe but if you are using larger data sets with inefficient queries then there are far worse performance issues to deal with.



-- The safe way to implement TMPFS for MySQL

shell> mkdir /tmp/mysqltmp

shell> chown mysql:mysql /tmp/mysqltmp

shell> id mysql

##NOTE: make sure you get the uid and gid for mysql

shell> vi /etc/fstab

## make sure this is in your fstab

tmpfs /tmp/mysqltmp tmpfs rw,uid=25,gid=26,size=2G,nr_inodes=10k,mode=0700 0 0

shell> mount /tmp/mysqltmp

shell> vi /etc/my.cnf #or the mysql config file for your server

## NOTE: inside the file add the following under [mysqld]

tmpdir=/tmp/mysqltmp/

shell> service mysql restart



-- The not so safe way to implement TMPFS for MySQL

shell> chown mysql:mysql /tmp

shell> id mysql

## NOTE: make sure you get the uid and gid for mysql

shell> vi /etc/fstab

## make sure this is in your fstab

tmpfs /tmp tmpfs rw,uid=25,gid=26,size=2G,nr_inodes=10k,mode=0700 0 0

shell> ## move everything out of /tmp

shell> mount /tmp



It is possible to run the 'not so safe' implementation, but if you can take scheduled downtime the 'safe way' is the best way to go!



The performance benefits for the TMPFS addition are great so try it out and let us know what you think after your implementation.
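One way to check whether the change pays off (counter names as in the MySQL memory documentation referenced above) is to watch the ratio of disk-based temporary tables before and after, and to confirm the mount:

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%';
-- compare Created_tmp_disk_tables against Created_tmp_tables to see how
-- often internal temporary tables spill to tmpdir (now backed by tmpfs)
shell> df -h /tmp/mysqltmp   # confirm the tmpfs mount and its free space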

3 ways MySQL uses indexes


I often see people confuse the different ways MySQL can use indexing, getting wrong ideas about what query performance they should expect. There are 3 main ways MySQL can use indexes for query execution, which are not mutually exclusive; in fact some queries will use indexes for all 3 purposes listed here.


Using an index to find rows. The main purpose of an index is to find rows quickly, without scanning the whole data set. This is the most typical reason an index gets added in the first place. The most popular index type in MySQL, BTREE, can speed up equality and prefix range matches. So if you have an index on (A,B), this index can be used to look up rows for WHERE clauses like A=5 ; A BETWEEN 5 AND 10 ; A=5 AND B BETWEEN 5 AND 10. It will however NOT be able to help look up rows for a B BETWEEN 5 AND 10 predicate, because B is not an index prefix. It is important to look at the key_len column in the explain plan to see how many index parts are actually used for the row lookup. A very common problem I see is multi-column indexes which are used, but only to a short prefix which is not very selective. A lot of these mistakes come from missing one very important MySQL limitation: once MySQL runs into an interval range it will not use any further index parts. If you have A BETWEEN 5 AND 10 AND B=5 for the same index, MySQL will use the index... but it will only use the A prefix for row lookups and scan the whole A BETWEEN 5 AND 10 range. It is interesting to note this limitation only applies to interval ranges; for enumerated ranges MySQL will use both key parts. Hence if you change this predicate to A IN (5,6,7,8,9,10) AND B=5 you will quite likely see improved query performance, as shown in the sketch below. Beware however of large nested enumerated ranges; they are very hard on the optimizer. This just describes how MySQL uses a single index; there are more complex rules for how indexes will be used if you look at multiple-index usage with 'index merge'.
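As a small illustration of that key_len point (the table and values below are made up for the example), compare the two EXPLAIN outputs; with an index on (A,B) the enumerated range should be able to use both key parts, while the interval range is limited to the prefix on A:

CREATE TABLE t (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  A INT NOT NULL,
  B INT NOT NULL,
  PRIMARY KEY (id),
  KEY a_b (A, B)
) ENGINE=InnoDB;

-- only the A prefix of the index is usable for the lookup:
EXPLAIN SELECT * FROM t WHERE A BETWEEN 5 AND 10 AND B = 5;

-- both key parts can be used, typically visible as a larger key_len:
EXPLAIN SELECT * FROM t WHERE A IN (5,6,7,8,9,10) AND B = 5;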


Using an index to sort data. Another great benefit of a BTREE index is that it allows retrieving data in sorted form, hence avoiding the external sort process for executing queries which require sorting. Using an index for sorting often comes together with using the index to find rows, however it can also be used just for the sort, for example if you're just using ORDER BY without any WHERE clauses on the table. In such a case you would see the 'index' type in EXPLAIN, which corresponds to scanning the (potentially) complete table in index order. It is very important to understand in which conditions an index can be used to sort data together with restricting the amount of rows. Looking at the same index (A,B), things like ORDER BY A ; ORDER BY A,B ; ORDER BY A DESC, B DESC will be able to use the full index for sorting (note MySQL may decide not to use the index for the sort if you sort the full table without a LIMIT). However ORDER BY B or ORDER BY A, B DESC will not be able to use the index, because the requested order does not line up with the order of data in the BTREE. If you have both a restriction and sorting, things like this would work: A=5 ORDER BY B ; A=5 ORDER BY B DESC ; A>5 ORDER BY A ; A>5 ORDER BY A,B ; A>5 ORDER BY A DESC, which again can easily be visualized as scanning a range in the BTREE. Things like this however would not work: A>5 ORDER BY B ; A>5 ORDER BY A,B DESC ; A IN (3,4) ORDER BY B. In these cases getting data in sorted form would require a bit more than a simple range scan in the BTREE, and MySQL decides to pass on it. There are some workarounds you can use, though.


Using an index to read data. Some storage engines (MyISAM and InnoDB included) can also use the index to read the data, avoiding reading the row data itself. This is not simply a saving of 2 reads per index entry instead of one; it can save orders of magnitude of IO in some cases. Indexes are sorted (at least on the page boundary), so doing an index range scan you typically get many index entries from the same page, while the rows themselves can be scattered across many pages, requiring potentially a lot of IOs. On top of that, if you just need access to a couple of columns, the index can simply be much smaller than the data, which is one of the reasons covering indexes help to speed up queries even if the data is in memory. If MySQL is only reading the index and not accessing rows you will see 'Using index' in the EXPLAIN output.


These are the main 'core' uses for indexes. You can also see others, like using an index for GROUP BY, but I think they can pretty much be viewed as one of these 3 ways described.




Entry posted by peter



Wednesday, 16 September 2009

Formaldehyde: PHP debug info for the client side



Andrea Giammarchi has released Formaldehyde, a new Ajax and PHP error debugger.


Simply grab the project and throw in an inclusion:



PHP:

<?php require_once 'formaldehyde.php'; ?>









You are off to the races.


Want to do a touch more?



JAVASCRIPT:

// *optional* custom ServerError constructor
function ServerError(e){
    for(var key in e)
        this[key] = e[key]
    ;
};
// make Firebug the best friend ever
(ServerError.prototype = new Error).name = 'ServerError';

// same call
var xhr = new XMLHttpRequest;
xhr.open('get', 'test.php', true);
xhr.onreadystatechange = function(){
    if(xhr.readyState === 4){
        if(199 < xhr.status && xhr.status < 400){
            // do something without failures
            eval(xhr.responseText);
        }
        // if Formaldehyde managed the call
        else if(xhr.status === 500 && xhr.getResponseHeader('X-Formaldehyde') != null) {
            // evaluate safely the response
            // generating a proper error
            console.log(new ServerError(eval('(' + xhr.responseText + ')')));
        } else {
            // 404 or other cases
            console.log(new Error(xhr.responseText));
        }
    }
};
xhr.send(null);









We have known about the great FirePHP for quite some time, but this is different as explained. Check it out!




"

Fast PHP clear empty array keys

Fast PHP clear empty array keys.

http://hasin.wordpress.com/2009/09/16/removing-empty-elements-from-an-array-the-php-way/&urlhash=qqrU&trk=news_discuss

InnoDB is dead. Long live InnoDB!


I find myself converting more and more customers’ databases to the InnoDB plugin. In one case it was a last resort: disk space was running out, and the plugin’s compression released 75% of the space; in another, a slow disk made for IO bottlenecks, and the plugin’s improvements & compression alleviated the problem; in yet another, I used the above to fight replication lag on a stubborn slave.
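For reference, a minimal sketch of what moving a table to the plugin's compressed format involves (settings per the InnoDB plugin documentation; the table name is a made-up example):

[mysqld]
# prerequisites for compressed tables in the InnoDB plugin
innodb_file_per_table = 1
innodb_file_format    = Barracuda

ALTER TABLE customer_data ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;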


In all those cases, I needed to justify the move to “new technology”. The questions “Is it GA? Is it stable?” are asked a lot. Well, just a few days ago the MySQL 5.1 distribution started shipping with InnoDB plugin 1.0.4. That gives some weight to the stability question when facing a doubtful customer.


But I realized that wasn’t the point.


Before InnoDB plugin was first announced, little was going on with InnoDB. There were concerns about the slow/nonexistent progress on this important storage engine, essentially the heart of MySQL. Then the plugin was announced, and everyone went happy.


The point being, since then I have only seen (or been exposed to, at least) progress on the plugin. The way I understand it, the plugin is the main (and only?) focus of development. And this is the significant thing to consider: if you’re keeping to “old InnoDB”, fine – but it won’t get you much farther; you’re unlikely to see great performance improvements (will 5.4 make a change? An ongoing improvement to InnoDB?). It may eventually become stale.


Converting to InnoDB plugin means you’re working with the technology at focus. It’s being tested, benchmarked, forked, improved, talked about, explained. I find this to be a major motive.


So, long live InnoDB Plugin! (At least till next year, that is, when we may all find ourselves migrating to PBXT)



Monday, 14 September 2009

When does InnoDB do disk IO?

This is a list of most of the sources of disk IO in InnoDB. Most of the statements below are true for official MySQL 5.0 and the InnoDB plugin prior to 1.0.4. Versions 1.0.4+ of the InnoDB plugin and XtraDB have made InnoDB much better for IO-bound workloads and eventually I will write more about them.

  • synchronous page read - User threads read pages synchronously on buffer cache misses. They may also generate readahead (prefetch) requests handled by other threads. A page is taken from the free list to cache the contents of the disk page and the page is usually moved to the head of the LRU list.
  • trx_purge - This is called by srv_master_thread to remove rows that have been marked as deleted when the rows are no longer visible to other transactions. The work uses synchronous IO. Dmitri has written about the benefits of using a separate thread to keep up with IO bound delete-intensive workloads.
  • buf_flush_free_margin - This is called from user threads to move pages to the free list from the end of the buffer pool LRU list. Dirty and clean pages are moved. Dirty pages are first written back to the database file(s) by the background write thread(s). This can be the source of a lot of mutex contention.
  • log_preflush_pool_modified_pages - This is called from user threads prior to modifying rows. It causes all dirty pages that are too old (all rather than a fixed number) to be written back to the database file(s). User threads wait for the writes to complete when the old dirty pages are too old. Adaptive checkpoint in the 1.0.4 plugin and XtraDB reduce the intermittent delays caused by this. The pages to write are found from the end of the buffer pool flush list.
  • buf_flush_batch - This is called by srv_master_thread when there are too many dirty pages. It attempts to enforce innodb_max_dirty_pages_pct. Prior to 1.0.4 and XtraDB, it writes 100 dirty pages at a time and otherwise writes up to innodb_io_capacity pages at a time. The pages to write are found from the end of the buffer pool flush list. For write-intensive workloads, enforcement of innodb_max_dirty_pages_pct was unlikely prior to the 1.0.4 plugin and XtraDB.
  • ibuf_contract_for_n_pages - This is called by srv_master_thread to apply deferred changes from the insert buffer to secondary index pages. It schedules up to 5 reads to be handled by the background read thread(s) and waits for the reads to complete. On read completion, a callback method applies the changes from the insert buffer entries to the pages making the pages dirty in the buffer pool. On write intensive workloads, the insert buffer will get full, consume half of the buffer pool and provide no performance benefit because the insert buffer is not used when full. This is not a good state as it is likely to remain full at that point until the server becomes idle and while full will continue to use half of the pages in the buffer pool.
  • doublewrite buffer - When the InnoDB doublewrite buffer is enabled, dirty pages are first written to the double write buffer using sequential IO to write the N dirty pages with one large IO request. After that write has been forced to disk, the pages are updated in place in the database file(s). While each page is written twice, this does not cost 2 random IOs per page as the write for the doublewrite buffer uses one large sequential IO for many pages.
  • transaction log - InnoDB uses a write-ahead log file to provide ACID semantics.
  • buf_read_ahead_linear, buf_read_ahead_random - These generate readahead (prefetch) requests to be handled by the background read thread(s). The requests are generated by user threads. The SHOW STATUS variables Innodb_buffer_pool_read_ahead_rnd and Innodb_buffer_pool_read_ahead_seq count the number of calls to these functions. As each call can generate 0 or more readahead requests, I am skeptical that these counters are useful in official MySQL.

Note that when InnoDB schedules dirty pages to be written back to the database files, neighbor pages may also be scheduled to be written.

InnoDB maintains 3 lists for the buffer pool. The free list has pages that can be used immediately to cache the result of a database file read. The LRU list stores pages in order of usage. A page is usually moved to the front of the LRU each time it is used. Because of this, a scan of a large table can wipe out the buffer pool. Fortunately, a fix for bug 45015 should fix this in the InnoDB 1.0.5 plugin. The flush list maintains pages in order of the LSN for the oldest change to the page.

Partitioning by dates: the quick how-to

There is thorough documentation about the Partitioning feature in MySQL 5.1. There are also nice articles like this one by Robin. However, I thought it would be useful to have a quick 'how-to' guide to partitioning by dates. I'll use the world schema to keep it easy.



Partitioning a table by a range of dates is quite popular. Unfortunately, the PARTITION BY RANGE only accepts an integer (or a function that evaluates to an integer) as the partition expression. That's fine if you want to partition by numeric types like this:



ALTER TABLE City PARTITION BY RANGE(id) (

PARTITION p0 VALUES LESS THAN (1000),

PARTITION p1 VALUES LESS THAN (2000),

PARTITION p2 VALUES LESS THAN (3000),

PARTITION p3 VALUES LESS THAN (4000),

PARTITION p4 VALUES LESS THAN (MAXVALUE));





Side note: when partitioning using range, the partitions must be defined using 'VALUES LESS THAN' and the ranges must be listed in increasing order.



If you'd rather partition by a date or datetime, it's not as easy. You cannot use a date directly (e.g., this syntax fails: RANGE(date_column)..PARTITION p0 VALUES LESS THAN ('2000-01-01')). There are plenty of workarounds, but my favorite is the to_days function.



First I need a date column, so let's add one to the City table:

ALTER TABLE City ADD citydate DATE;



# Now populate it with random days in the last ~3 years

UPDATE City SET citydate = current_date - INTERVAL truncate(rand()*1000,0) DAY;



#Remove the old partitioning

ALTER TABLE City REMOVE PARTITIONING;



#Remove the PRIMARY KEY constraint and replace it with an index (I'll explain below)

ALTER TABLE City DROP PRIMARY KEY, ADD INDEX(id);



#Partition by the date col:

ALTER TABLE City PARTITION BY RANGE (to_days(citydate)) (

PARTITION p0 VALUES LESS THAN (to_days('2007-01-01')),

PARTITION p1 VALUES LESS THAN (to_days('2008-01-01')),

PARTITION p2 VALUES LESS THAN (to_days('2009-01-01')),

PARTITION p3 VALUES LESS THAN (to_days('2009-03-01')),

PARTITION p4 VALUES LESS THAN (to_days('2009-06-01')),

PARTITION p5 VALUES LESS THAN MAXVALUE);




Notice the partitions do not have to be even. This is very handy; you may want to put older records that aren't accessed often in a larger partition and keep the recent data in small, fast partitions.
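A common follow-on with date-range partitions is rotating them over time. A sketch of dropping the oldest data and making room for newer dates (partition names follow the example above):

# Drop all pre-2007 rows by removing their partition (much faster than DELETE):
ALTER TABLE City DROP PARTITION p0;

# Split the MAXVALUE partition to add a new range:
ALTER TABLE City REORGANIZE PARTITION p5 INTO (
    PARTITION p5 VALUES LESS THAN (to_days('2009-09-01')),
    PARTITION p6 VALUES LESS THAN MAXVALUE);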



Let's see partition pruning in action. If I run a query that only needs rows from a few partitions, the optimizer will only read from those partitions:

EXPLAIN PARTITIONS SELECT count(*) FROM City WHERE citydate BETWEEN '2009-01-01' AND '2009-08-01';



+----+-------------+-------+------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra |

+----+-------------+-------+------------+------+---------------+------+---------+------+------+-------------+

| 1 | SIMPLE | City | p3,p4,p5 | ALL | NULL | NULL | NULL | NULL | 4079 | Using where |

+----+-------------+-------+------------+------+---------------+------+---------+------+------+-------------+




Notice the optimizer realizes it only needs to use certain partitions to find the results, shown by the partitions column.



Now let me explain why I removed the primary key. There is a rule about using unique constraints with partitioning. The short answer is: you can't have a unique constraint on something that you didn't partition by. The reason is that when you insert a record and the uniqueness needs to be checked, we don't want to search through every partition to verify uniqueness. Indexes are local to the partition (global indexes may be implemented in the future). So you can only have a unique constraint if all the columns in the constraint are used in the partitioning expression.

What's New in the MySQL Enterprise Fall 2009 Release? - Interview with Mark Matthews and Andy Bang

We talked to Mark Matthews and Andy Bang, two core developers in the MySQL Enterprise Monitor team, to give us some insight into the new features in the latest MySQL Enterprise Release. You can find more about the MySQL Query Analyzer in our previous interview with Mark Matthews.

Has your blog been hacked?


While not a MySQL topic, as most of my readers view my MySQL blog: my WordPress blog has been hacked. Has yours?


Like many, I’m sure you may have read about it, for example at Wordpress blogs under attack from hack attack, but I was surprised when my custom permalinks did not work.


Being surprised, I looked at the Administrator accounts, and I found that there was one more than the number being displayed in the list. I had to dig into the database to find the problem.


mysql> select * from wp_users where ID in (select user_id from wp_usermeta where meta_key = 'wp_capabilities' and meta_value like '%admin%');
+-----+-------------+------------------------------------+---------------+------------------------------+---------------------------+---------------------+---------------------+-------------+--------------+
| ID | user_login | user_pass | user_nicename | user_email | user_url | user_registered | user_activation_key | user_status | display_name |
+-----+-------------+------------------------------------+---------------+------------------------------+---------------------------+---------------------+---------------------+-------------+--------------+
| 1 | admin | $P$BHZFK/prDplb/W/024yrH49JvAmmCE. | ronald | ronald.bradford@arabx.com.au | http://ronaldbradford.com | 2005-11-21 23:43:47 | | 0 | Ronald |
| 127 | ronald | $P$B..e75VtFsv9bUGj5H5NTiXXPQIitr1 | ronald | ronald.bradford@gmail.com | http://ronaldbradford.com | 2009-02-22 20:13:33 | | 0 | ronald |
| 133 | ChaseKent87 | $P$Bl8cVSzBums33Md6u2PQtUVY2PPBHK. | chasekent87 | | | 2009-09-05 06:36:59 | | 0 | ChaseKent87 |
+-----+-------------+------------------------------------+---------------+------------------------------+---------------------------+---------------------+---------------------+-------------+--------------+
3 rows in set (0.00 sec)

mysql> delete from wp_users where ID=133;
mysql> delete from wp_usermeta where user_id=133;

However the damage has been done, and an update to the recommended 2.8.4 is unlikely to fix the data corruption.


Being a good DBA I have a nightly backup of my database. Being a diligent system administrator, I have not 1 copy, but 3 copies of my system: one on my web site and two offsite.


The problem is I don’t keep older backups of my data, only a day-old version.



Sunday, 13 September 2009

Memcached Functions for MySQL and Moxi: a great combination

Working at Northscale has been a lot of great fun lately! I have finally figured out how to get puppet to set up a stock Amazon instance with everything we need and have been impressed with how you can automate system setup with puppet. I remember when I worked at Grazr how much of a hassle it was for us to have to rebuild systems. Something like Puppet would have been a godsend.

Today I thought I would post about how cool moxi is. If you don't know, moxi is a memcached proxy (http://labs.northscale.com/moxi/) which lets you move the complexity of setting up the list of memcached servers you are using out of your clients. Also, moxi has some great features such as:

* De-duplication of requests for popular keys
* A front cache, L1 cache to avoid network hops
* Fire and forget SET (Like an async SET) this means 'set a value, but don't wait to know if it was successful'
* Time-outs for setting the maximum time for operations

This post will show you how moxi can be used with the memcached UDFs for MySQL (https://launchpad.net/memcached-udfs). It's really quite amazingly easy. I'll even throw in compilation and setup of moxi.

First, obtain the source for moxi which can be found at:

git clone git://github.com/northscale/moxi.git

To build moxi, you will need both libevent and libmemcached. I'll leave it to you to obtain both of those. For libevent, just use the package for either centos or ubuntu. For libmemcached, use the source because many packages available for it are somewhat stale, and libmemcached is actively developed by Trond et al!

Now enter the moxi source directory and do the following:

patrick-galbraiths-macbook-pro:moxi patg$ sh autogen.sh
libtoolize...
aclocal...
autoheader...
automake...
autoconf...

./configure


make

sudo make install

At this point, moxi is ready to use. Now, depending where you have memcached running, this will determine which servers you proxy to. In this demonstration, I'm using 4 different memcached servers:

* Two locally within my network
* Two Northscale memcached Amazon EC2 images (http://labs.northscale.com/memcached-ami/, ami ami-bf4cadd6). These are great-- you just launch them and then *use* them!

So, to launch moxi (I changed the hostnames to protect the innocent)

moxi -z 33133=127.0.0.1:22122,192.168.1.118:11211,ec2-00-00-00-yy.compute1.amazonaws.com:11211,ec2-00-00-00-xx.compute-1.amazonaws.com:11211 -vvv

So, the syntax is quite simple:
moxi -z 'port I want to answer to'='host:port server list I am proxying for'

In this example, just as you can also specify with memcached itself, I'm running with the -vvv flag for verbosity because I'm a geek and this is a great way to see what moxi is doing. Also, I'm using port 33133 for moxi to answer on.

So, I installed my UDFs on the latest version of MariaDB, which I have installed in /usr/local/maria:

bzr branch lp:memcached-udfs
cd memcached-udfs
sh config/bootstrap
./configure --with-mysql=/usr/local/maria/bin/mysql_config --libdir=/usr/local/maria/lib/mysql/plugin/
make
sudo make install
mysql -u root test < sql/install_functions.sql

At this point, the UDFs are ready to use!

The convenient thing here is that moxi makes it so you don't have to specify a long server list in the memc_servers_set() UDF:

mysql> select memc_servers_set('127.0.0.1:33133');
+-------------------------------------+
| memc_servers_set('127.0.0.1:33133') |
+-------------------------------------+
| 0 |
+-------------------------------------+


Now, I was able to start storing and retrieving data via moxi to my 4 memcached instances:

mysql> select memc_set('key1', 'test value');
+--------------------------------+
| memc_set('key1', 'test value') |
+--------------------------------+
| 1 |
+--------------------------------+
1 row in set (0.07 sec)

mysql> select memc_get('key1');
+------------------+
| memc_get('key1') |
+------------------+
| test value |
+------------------+
1 row in set (0.08 sec)


And of interest, observe the output of moxi:

cproxy_create on port 33133, downstream 127.0.0.1:22122,192.168.1.118:11211,ec2-72-44-62-67.compute-1.amazonaws.com:11211,ec2-67-202-38-195.compute-1.amazonaws.com:11211
cproxy_listen on port 33133, downstream 127.0.0.1:22122,192.168.1.118:11211,ec2-72-44-62-67.compute-1.amazonaws.com:11211,ec2-67-202-38-195.compute-1.amazonaws.com:11211
<40 server listening (proxy-upstream-ascii)
<40 initialized conn_funcs to default
<41 server listening (proxy-upstream-ascii)
<41 initialized conn_funcs to default
<41 cproxy listening on port 33133
<40 cproxy listening on port 33133
moxi listening on 33133 with 2 conns
<42 new proxy-upstream-ascii client connection
42: going from conn_new_cmd to conn_waiting
42: going from conn_waiting to conn_read
42: going from conn_read to conn_parse_cmd
<42 cproxy_process_upstream_ascii version
>42 VERSION 0.9.6
42: going from conn_parse_cmd to conn_write
42: going from conn_write to conn_new_cmd
42: going from conn_new_cmd to conn_waiting
42: going from conn_waiting to conn_read
<43 new proxy-upstream-ascii client connection
43: going from conn_new_cmd to conn_waiting
43: going from conn_waiting to conn_read
43: going from conn_read to conn_parse_cmd
<43 cproxy_process_upstream_ascii set key1 0 0 10
43: going from conn_parse_cmd to conn_nread
43 pause_upstream_for_downstream
43: going from conn_nread to conn_pause
assign_downstream
cproxy_add_downstream 0 4
cproxy_create_downstream: 127.0.0.1:22122,192.168.1.118:11211,ec2-72-44-62-67.compute-1.amazonaws.com:11211,ec2-67-202-38-195.compute-1.amazonaws.com:11211, 0, 7
release_downstream
check_downstream_config 1
check_downstream_config 1
assign_downstream, matched to upstream 43
<44 new proxy-downstream-ascii client connection
<45 new proxy-downstream-ascii client connection
<46 new proxy-downstream-ascii client connection
<47 new proxy-downstream-ascii client connection
46: going from conn_pause to conn_mwrite
assign_downstream, done
46: going from conn_mwrite to conn_new_cmd
46: going from conn_new_cmd to conn_waiting
46: going from conn_waiting to conn_read
46: going from conn_read to conn_parse_cmd
<46 cproxy_process_a2a_downstream STORED
46: going from conn_parse_cmd to conn_pause
>43 STORED
43: going from conn_pause to conn_write
<46 cproxy_on_pause_downstream_conn
46 release_downstream_conn, downstream_used 1 1
release_downstream
check_downstream_config 1
assign_downstream
assign_downstream, done
43: going from conn_write to conn_new_cmd
43: going from conn_new_cmd to conn_waiting
43: going from conn_waiting to conn_read
43: going from conn_read to conn_parse_cmd
<43 cproxy_process_upstream_ascii quit
43: going from conn_parse_cmd to conn_closing
<43 cproxy_on_close_upstream_conn
<43 connection closed.
<43 new proxy-upstream-ascii client connection
43: going from conn_new_cmd to conn_waiting
43: going from conn_waiting to conn_read
43: going from conn_read to conn_parse_cmd
<43 cproxy_process_upstream_ascii get key1
43 pause_upstream_for_downstream
43: going from conn_parse_cmd to conn_pause
assign_downstream
cproxy_add_downstream 0 4
cproxy_create_downstream: 127.0.0.1:22122,192.168.1.118:11211,ec2-72-44-62-67.compute-1.amazonaws.com:11211,ec2-67-202-38-195.compute-1.amazonaws.com:11211, 0, 7
release_downstream
check_downstream_config 1
check_downstream_config 1
assign_downstream, matched to upstream 43
<48 new proxy-downstream-ascii client connection
<49 new proxy-downstream-ascii client connection
<50 new proxy-downstream-ascii client connection
<51 new proxy-downstream-ascii client connection
forward multiget get key1 (3 0)
50: going from conn_pause to conn_mwrite
forward multiget nwrite 1 out of 4
assign_downstream, done
50: going from conn_mwrite to conn_new_cmd
50: going from conn_new_cmd to conn_waiting
50: going from conn_waiting to conn_read
50: going from conn_read to conn_parse_cmd
<50 cproxy_process_a2a_downstream VALUE key1 0 10
50: going from conn_parse_cmd to conn_nread
<50 cproxy_process_a2a_downstream_nread 0 200
50: going from conn_nread to conn_new_cmd
<43 cproxy ascii item response success
50: going from conn_new_cmd to conn_parse_cmd
<50 cproxy_process_a2a_downstream END
50: going from conn_parse_cmd to conn_pause
<50 cproxy_on_pause_downstream_conn
50 release_downstream_conn, downstream_used 1 1
release_downstream
43: going from conn_pause to conn_mwrite
check_downstream_config 1
assign_downstream


You can see (yes, this is a lot of output) how moxi is talking to the 4 different servers for a simple set and get of the key 'key1'. Also you see this concept of 'downstream' and 'upstream': where the data is actually being stored versus where it is passed to moxi from, for instance a PHP or Perl client, or even another moxi.

So, as you can see, using moxi with the memcached Functions for MySQL (memcached UDFs), you gain even more convenience and ability to store data to memcached. Also, you've seen mention of Northscale's Amazon EC2 Machine Images (AMIs) and how you can easily add more memcached instances for your application's needs.
