Channel: MySQL Forums - InnoDB

Frequency of COMMITs (1 reply)

I'm processing 6 years' worth of web server logs into two InnoDB tables, one for unique IPs and one for server requests. So far, I have 3.4 million unique IPs and 260 million server requests.

I wrote the program that is doing these INSERTs so that autocommit is off and it issues a COMMIT every time it has inserted 100 records into the server requests table. This is just a finger-in-the-air compromise between never committing and committing on each insertion.
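The batching pattern described above can be sketched roughly as follows. This is a hypothetical helper, not the poster's actual program: `execute` and `commit` stand in for whatever the real MySQL driver provides (e.g. `cursor.execute(...)` and `connection.commit()` with autocommit disabled), and the batch size of 100 matches the post.

```python
def insert_with_batched_commits(records, execute, commit, batch_size=100):
    """Insert records one at a time, issuing a COMMIT after every
    batch_size inserts and a final COMMIT for any partial batch.
    Returns the number of COMMITs issued."""
    pending = 0
    commits = 0
    for record in records:
        execute(record)       # e.g. cursor.execute(INSERT_SQL, record)
        pending += 1
        if pending == batch_size:
            commit()          # e.g. connection.commit()
            commits += 1
            pending = 0
    if pending:               # don't lose the trailing partial batch
        commit()
        commits += 1
    return commits
```

For 260 million rows this issues one COMMIT per 100 rows, i.e. roughly 2.6 million transactions, rather than 260 million with autocommit on.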

1) Do you think this is optimal?

My second question relates to backup. It's a bit annoying that no one seems able to say definitively which files on disk make up InnoDB tables. With this number of rows, a logical backup isn't practical. Going by Windows file modification dates, I'd say this update process is only changing these files:

Directory of D:\mysql\data

21/11/2011 21:23 294,139 Brisk2.err
21/11/2011 21:23 5 Brisk2.pid
22/11/2011 14:44 115,865,550,848 ibdata1
22/11/2011 14:44 56,623,104 ib_logfile0
22/11/2011 14:31 56,623,104 ib_logfile1

2) Is it sufficient to back up just the files above? (There are .frm files associated with these tables, but they don't change.)
