This error means your client tried to run a query but the connection to the MySQL server was cut while the server was still working on the query or trying to send results. It can be caused by timeouts, big packets, network problems, server crashes, or inefficient queries. Below I’ll show how to diagnose and fix it with clear, practical steps.
Real-life scenarios for MySQL Error 2013
- Uploading a very large file / BLOB from your app — the packet is too big.
- A long-running SELECT or UPDATE (no index) that takes minutes — a network device times out.
- Server ran out of memory or crashed while processing the query.
- Network drop or firewall/load-balancer closed an idle connection.
- Replication: the master or slave has a smaller max_allowed_packet and replication fails.
How to diagnose MySQL Error 2013
- Reproduce the query from the server itself (to rule out the network): SSH to the DB server and run the same query with the mysql client. If it runs there, the problem is on the network/client side.
- Check the MySQL error log for clues:
- Common places: `/var/log/mysql/error.log` or `/var/log/mysqld.log`.
- Example: `sudo tail -n 200 /var/log/mysql/error.log`
- Check if MySQL was killed / OOM (system logs):
- `sudo journalctl -u mysql -n 200` (or check `/var/log/syslog`)
- Check key MySQL variables (from any client):
```sql
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'net_read_timeout';
SHOW VARIABLES LIKE 'net_write_timeout';
SHOW VARIABLES LIKE 'wait_timeout';
```
- Profile the query: run `EXPLAIN <your SELECT>` to see if it needs an index (a probe sketch follows this list).
- Try a smaller subset of the query (LIMIT 10) — if that works, the problem is volume or streaming.
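If you prefer to run this probe from your application instead of the mysql client, here is a minimal sketch assuming the PyMySQL driver; the host, credentials, table, and query are placeholders:

```python
# Sketch: probe a suspect query from the application side (placeholder query/credentials).
import time
import pymysql

conn = pymysql.connect(host="h", user="u", password="p", database="db")
query = "SELECT big_column FROM big_table WHERE some_col = 42"

with conn.cursor() as cur:
    cur.execute("EXPLAIN " + query)       # look for full table scans / missing keys
    for row in cur.fetchall():
        print(row)

    start = time.monotonic()
    cur.execute(query + " LIMIT 10")      # if a small subset succeeds, the problem is volume
    cur.fetchall()
    print("LIMIT 10 took", round(time.monotonic() - start, 2), "seconds")
conn.close()
```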
Step-by-step fixes for MySQL Error 2013
1) Increase max_allowed_packet (common fix for big BLOBs / large results)
Check current:
```sql
SHOW VARIABLES LIKE 'max_allowed_packet';
```
Temporarily increase (applies to new connections; reverts when the server restarts):
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024; -- sets to 64MB
Permanent: edit the MySQL config (/etc/mysql/my.cnf or /etc/my.cnf) and add under [mysqld]:
[mysqld]
max_allowed_packet=64M
Then restart MySQL:
- Debian/Ubuntu: sudo systemctl restart mysql
- RHEL/CentOS: sudo systemctl restart mysqld
Note: Set value according to your needs (64M or 128M are common safe options).
2) Increase net_read_timeout and net_write_timeout
These control how long MySQL waits for reading/writing network data.
Temporarily:
SET GLOBAL net_read_timeout = 300;
SET GLOBAL net_write_timeout = 300;
Permanent: add to [mysqld]:
net_read_timeout=300
net_write_timeout=300
Restart MySQL to apply permanent changes.
3) Increase client / driver timeouts
Your language driver (PHP, Python, Java) may time out early. Set connection/read timeouts in your DB client. Example approaches (conceptual):
- PHP (mysqli / PDO): increase connect timeout and use streaming/unbuffered queries.
- Python (mysql-connector / pymysql): pass connection_timeout or read_timeout / write_timeout.
(Exact code differs by driver — search your driver docs for “read timeout” or “connection timeout”.)
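As one concrete illustration, here is a minimal sketch assuming the PyMySQL driver; the connection details are placeholders and the keyword argument names differ in other drivers:

```python
# Sketch: raising driver-side timeouts with PyMySQL (placeholder credentials).
# connect_timeout / read_timeout / write_timeout are PyMySQL keyword arguments;
# other drivers use different names, so check your driver's documentation.
import pymysql

conn = pymysql.connect(
    host="db.example.com",   # placeholder host
    user="app",              # placeholder user
    password="secret",       # placeholder password
    database="appdb",        # placeholder database
    connect_timeout=10,      # seconds allowed to establish the connection
    read_timeout=300,        # seconds to wait for the server to send result data
    write_timeout=300,       # seconds to wait while sending data to the server
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()
```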
4) Stream large results instead of loading them all at once
If the query returns a large dataset, stream results to avoid client memory issues and avoid big single packets.
PHP (mysqli example — streaming):
$mysqli = new mysqli($host, $user, $pass, $db);
$result = $mysqli->query("SELECT big_column FROM big_table", MYSQLI_USE_RESULT);
while ($row = $result->fetch_assoc()) {
// process row one by one
}
$result->free();
Python (mysql.connector example — unbuffered cursor):
import mysql.connector

cnx = mysql.connector.connect(user='u', password='p', host='h', database='db')
cursor = cnx.cursor(buffered=False)  # unbuffered: rows stream from the server as you iterate
cursor.execute("SELECT big_column FROM big_table")
for row in cursor:
    pass  # process each row here, one at a time
cursor.close()
cnx.close()
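If you use PyMySQL rather than mysql.connector, a server-side cursor (SSCursor) gives the same streaming behaviour. A minimal sketch with placeholder connection details:

```python
# Sketch: streaming rows with PyMySQL's SSCursor (server-side cursor).
# Placeholder credentials; SSCursor fetches rows from the server as you iterate
# instead of loading the whole result set into client memory.
import pymysql
from pymysql.cursors import SSCursor

conn = pymysql.connect(host="h", user="u", password="p", database="db",
                       cursorclass=SSCursor)
with conn.cursor() as cur:
    cur.execute("SELECT big_column FROM big_table")
    for row in cur:
        pass  # process each row here
conn.close()
```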
5) Optimize the query (often the real root cause)
- Run EXPLAIN <query>; add indexes if table scans appear.
- Fetch only needed columns (avoid `SELECT *`).
- Use pagination (LIMIT ... OFFSET) for large selects.
- Rewrite heavy joins or split big updates into smaller batches (see the batching sketch after this list).
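For the last point, a common batching pattern is to repeat a LIMIT-ed UPDATE until no rows are affected. A minimal sketch, assuming PyMySQL and a hypothetical orders table with an archived flag:

```python
# Sketch: splitting one huge UPDATE into small batches (hypothetical table/columns).
# Each iteration touches at most 1000 rows, so no single statement runs long enough
# to hit network or server timeouts.
import pymysql

conn = pymysql.connect(host="h", user="u", password="p", database="db", autocommit=True)
with conn.cursor() as cur:
    while True:
        affected = cur.execute(
            "UPDATE orders SET archived = 1 "
            "WHERE archived = 0 AND created_at < '2023-01-01' "
            "LIMIT 1000"
        )
        if affected == 0:
            break  # nothing left to update
conn.close()
```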
6) For large imports, use file-based methods
- Use LOAD DATA INFILE for CSV bulk import; it's faster and has lower protocol overhead.
- If importing an SQL dump, use a client with an increased max_allowed_packet:
mysql --max_allowed_packet=64M -u user -p database < dump.sql
7) Check network devices and hosting idle timeouts
If your server is behind a load-balancer or proxy (AWS ELB, Azure, Nginx proxy), the device may drop idle connections:
- Example: AWS ELB has an idle timeout (60s by default). If your query or transfer exceeds it, the connection is dropped. Solution: increase the idle timeout on the load balancer, or avoid long idle periods by streaming or using keepalive settings (see the ping sketch below).
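For the keepalive idea, one application-level option is to ping the connection before reusing it. A minimal sketch, assuming mysql-connector-python (whose ping() method can transparently reconnect); the connection details are placeholders:

```python
# Sketch: ping before reusing a connection that may have sat idle behind a
# load balancer; reconnect=True transparently re-opens it if it was dropped.
import mysql.connector

cnx = mysql.connector.connect(user="u", password="p", host="h", database="db")

def get_cursor(connection):
    # mysql-connector-python API: ping() retries and reconnects per the flags below
    connection.ping(reconnect=True, attempts=3, delay=2)
    return connection.cursor()

cur = get_cursor(cnx)
cur.execute("SELECT 1")
print(cur.fetchone())
```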
8) If replication is involved
Ensure max_allowed_packet and the relevant timeouts are set on both master and slave. Replication threads can fail if packet sizes mismatch.
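A quick way to confirm the values match on both servers, as a sketch with placeholder hostnames and credentials:

```python
# Sketch: verify max_allowed_packet matches on master and slave (placeholder hosts).
import pymysql

def get_max_allowed_packet(host):
    conn = pymysql.connect(host=host, user="u", password="p")
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
        _, value = cur.fetchone()   # rows come back as (variable_name, value)
    conn.close()
    return int(value)

master = get_max_allowed_packet("master.example.com")
slave = get_max_allowed_packet("slave.example.com")
print("master:", master, "slave:", slave, "match:", master == slave)
```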
9) Check server health (memory & CPU)
If the MySQL process is killed by the OS (OOM killer), queries fail. Check:
- top / htop for memory usage.
- dmesg or /var/log/syslog for OOM messages.
- Increase server RAM or tune MySQL memory settings if necessary.
Example: a quick end-to-end workflow (practical)
- Try running the query on the DB server:
mysql -u user -p -e "SELECT ..."
- If it works on the server but not from the client → network / client timeout.
- Check max_allowed_packet:
SHOW VARIABLES LIKE 'max_allowed_packet';
- If small, increase it temporarily:
SET GLOBAL max_allowed_packet = 67108864;
- If it's still failing, enable logging and tail the error log:
sudo tail -f /var/log/mysql/error.log
- Try a streamed query from your application (use unbuffered/streaming approach).
- Optimize the SQL (EXPLAIN → add index or chunk the query).
- If using a load balancer, raise idle timeout.
Quick checklist to paste into your ticket / notes
- Run query on DB server (ruling out network)
- Check MySQL error log for crash messages
- Increase max_allowed_packet (try 64M)
- Increase net_read_timeout / net_write_timeout (try 300s)
- Use streaming/unbuffered queries in the app
- Optimize query (EXPLAIN) and add indexes
- Check server memory / CPU / OOM logs
- Check load-balancer / proxy idle timeouts
- If replication, sync max_allowed_packet on both sides
When to contact your host / DBA
- You see MySQL crashes in logs or OOM messages.
- You cannot change my.cnf because of managed hosting.
- The issue is intermittent and you suspect network infrastructure (cloud provider / corporate firewall).
‘Error 2013’ is basically: “the connection was cut while MySQL was working.” Fixing it usually means one of:
- Allowing larger packets / longer timeouts,
- Streaming or chunking big data, or
- Fixing the slow/inefficient query or server resource problem.
Happy Coding!