Don't use loops, especially at that scale, in an RDBMS.
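All of the examples below assume a table shaped roughly like the following. The actual schema isn't shown in the question, so this DDL is only a guess, with column types inferred from the inserted values:
-- Hypothetical schema (inferred from the column names and values, not from the question)
CREATE TABLE `entity_versionable` (
  `id`        INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `fk_entity` INT NOT NULL,
  `str1`      VARCHAR(255),
  `str2`      VARCHAR(255),
  `bool1`     TINYINT(1),
  `double1`   DOUBLE,
  `date`      DATETIME,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;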
Try to quickly fill your table with 1 million rows using a single query:
INSERT INTO `entity_versionable` (fk_entity, str1, str2, bool1, double1, date)
SELECT 1, 'a1', 100, 1, 500000, '2013-06-14 12:40:45'
FROM
(
    -- cross join of six 10-row digit tables: 10^6 = 1,000,000 rows
    SELECT a.N + b.N * 10 + c.N * 100 + d.N * 1000 + e.N * 10000 + f.N * 100000 + 1 AS N
    FROM (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
       , (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b
       , (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) c
       , (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) d
       , (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) e
       , (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) f
) t;
On my box (MacBook Pro, 16 GB RAM, 2.6 GHz Intel Core i7) it took about 8 seconds to complete:
Query OK, 1000000 rows affected (7.63 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
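If you need a row count other than a power of ten, you can add or remove digit tables to change the order of magnitude, or cap the generator with LIMIT. A small self-contained sketch that inserts 42 rows from a 100-row generator (both counts are arbitrary examples):
INSERT INTO `entity_versionable` (fk_entity, str1, str2, bool1, double1, date)
SELECT 1, 'a1', 100, 1, 500000, '2013-06-14 12:40:45'
FROM
(
    -- two digit tables: 10 * 10 = 100 candidate rows
    SELECT a.N + b.N * 10 + 1 AS N
    FROM (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
       , (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b
) t
LIMIT 42;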
UPDATE 1: Now a stored procedure version that uses a prepared statement:
DELIMITER $$

CREATE PROCEDURE `inputRowsNoRandom`(IN NumRows INT)
BEGIN
    DECLARE i INT DEFAULT 0;

    -- prepare the INSERT once, then execute it NumRows times
    PREPARE stmt
    FROM 'INSERT INTO `entity_versionable` (fk_entity, str1, str2, bool1, double1, date)
          VALUES(?, ?, ?, ?, ?, ?)';
    SET @v1 = 1, @v2 = 'a1', @v3 = 100, @v4 = 1, @v5 = 500000, @v6 = '2013-06-14 12:40:45';

    WHILE i < NumRows DO
        EXECUTE stmt USING @v1, @v2, @v3, @v4, @v5, @v6;
        SET i = i + 1;
    END WHILE;

    DEALLOCATE PREPARE stmt;
END$$

DELIMITER ;
It completed in ~3 minutes:
mysql> CALL inputRowsNoRandom(1000000);
Query OK, 0 rows affected (2 min 51.57 sec)
Feel the difference: 8 seconds versus 3 minutes.
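Much of that gap comes from committing once per row: with autocommit on (the default), every EXECUTE is its own transaction. Before rewriting the procedure, you can test that hypothesis by wrapping the call in a single hand-made transaction (a sketch; your timings will vary):
SET autocommit = 0;                -- stop committing after every statement
CALL inputRowsNoRandom(1000000);   -- all rows now go into one transaction
COMMIT;                            -- a single commit at the end
SET autocommit = 1;                -- restore the default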
UPDATE 2: To speed things up, we can explicitly use transactions and commit the inserts in batches. So here is an improved version of the SP:
DELIMITER $$

CREATE PROCEDURE inputRowsNoRandom1(IN NumRows BIGINT, IN BatchSize INT)
BEGIN
    DECLARE i BIGINT DEFAULT 0;    -- BIGINT so the counter cannot overflow before NumRows

    PREPARE stmt
    FROM 'INSERT INTO `entity_versionable` (fk_entity, str1, str2, bool1, double1, date)
          VALUES(?, ?, ?, ?, ?, ?)';
    SET @v1 = 1, @v2 = 'a1', @v3 = 100, @v4 = 1, @v5 = 500000, @v6 = '2013-06-14 12:40:45';

    START TRANSACTION;
    WHILE i < NumRows DO
        EXECUTE stmt USING @v1, @v2, @v3, @v4, @v5, @v6;
        SET i = i + 1;
        -- commit every BatchSize rows instead of once per row
        IF i % BatchSize = 0 THEN
            COMMIT;
            START TRANSACTION;
        END IF;
    END WHILE;
    COMMIT;    -- commit the final, possibly partial, batch

    DEALLOCATE PREPARE stmt;
END$$

DELIMITER ;
Results with different batch sizes:
mysql> CALL inputRowsNoRandom1(1000000,1000);
Query OK, 0 rows affected (27.25 sec)
mysql> CALL inputRowsNoRandom1(1000000,10000);
Query OK, 0 rows affected (26.76 sec)
mysql> CALL inputRowsNoRandom1(1000000,100000);
Query OK, 0 rows affected (26.43 sec)
See the difference for yourself: still more than 3 times slower than the cross join. The loop keeps paying per-statement overhead for every row, while the single INSERT ... SELECT does all the work in one set-based pass.