
My boss is having me create a database table that keeps track of some of our inventory with various parameters. It's meant to be implemented as a cron job that runs every half hour or so, but the scheduling part isn't important, since we've already agreed to handle that later.

What I want to know is whether it's more efficient to delete everything in the table each time the script is called and repopulate it, or to go through each record, determine whether any changes were made, and update each entry accordingly. The former is easier to implement, but given that we have over 700 separate records to keep track of, I don't know whether doing so would put a huge load on the server. The script is written in PHP.
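For concreteness, here is a minimal sketch of the second approach, assuming PDO, a hypothetical `inventory` table keyed by SKU, and a `$freshItems` array coming from whatever source feeds the script:

```php
<?php
// Hypothetical connection details; error mode set so failures throw.
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Load current quantities keyed by SKU so each fresh item is an O(1) lookup.
$current = [];
foreach ($pdo->query('SELECT sku, quantity FROM inventory') as $row) {
    $current[$row['sku']] = $row['quantity'];
}

$update = $pdo->prepare('UPDATE inventory SET quantity = ? WHERE sku = ?');
foreach ($freshItems as $item) {
    // Only touch rows whose value actually changed; brand-new or removed
    // SKUs would need separate INSERT/DELETE handling not shown here.
    if (isset($current[$item['sku']]) && $current[$item['sku']] != $item['quantity']) {
        $update->execute([$item['quantity'], $item['sku']]);
    }
}
```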

asked Dec 8, 2015 at 23:16

2 Answers


700 records is far too few to raise performance concerns. Don't even think about it; do whichever is easier for you.

But if it is performance you are after: updating rows is slower than inserting rows (especially if you are not expecting any generated keys back, so an insert is a one-way trip to the database rather than a round trip), and TRUNCATE TABLE tends to be faster than DELETE FROM.
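For the repopulate route, a minimal sketch assuming PDO, MySQL, and the same hypothetical `inventory` table and `$freshItems` array as in the question:

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// TRUNCATE empties the table by dropping and recreating it rather than
// deleting row by row. Caveat: in MySQL it causes an implicit commit, so
// it cannot be rolled back inside a transaction.
$pdo->exec('TRUNCATE TABLE inventory');

// Repopulate with a single prepared statement; 700 executions is trivial.
$insert = $pdo->prepare('INSERT INTO inventory (sku, quantity) VALUES (?, ?)');
foreach ($freshItems as $item) {
    $insert->execute([$item['sku'], $item['quantity']]);
}
```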

answered Dec 8, 2015 at 23:44

2 Comments

I second this: do not be concerned about the load, 700 records is nothing. Just do it whichever way is easier for you. But consider that if you have other systems that depend on that data and you delete the records, a system that queries the DB in that window (before the data is repopulated) might get nothing back. So running an update might take more time, but it might be the critical choice.
@MihaiP. That, too. Absolutely any practical concern takes precedence over performance when we only have 700 rows.

If you are using a SQL database with auto-generated IDs for the inventory rows, then it would be good practice to update them, since in theory repeatedly deleting and reinserting rows will eventually exhaust (overflow) the ID column.
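One way to update in place without ever emptying the table is an upsert; a minimal sketch, assuming MySQL and a unique key on a hypothetical `sku` column:

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// MySQL-specific: inserts a new row, or updates the existing one when the
// unique key on sku already matches, so the table is never emptied.
$upsert = $pdo->prepare(
    'INSERT INTO inventory (sku, quantity) VALUES (?, ?)
     ON DUPLICATE KEY UPDATE quantity = VALUES(quantity)'
);
foreach ($freshItems as $item) {
    $upsert->execute([$item['sku'], $item['quantity']]);
}
```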

Another approach would be to use a NoSQL DB like MongoDB and simply update it with the given JSON bodies, keeping the existing IDs; the DB will figure out on its own whether each document needs an insert or an update.
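A minimal sketch of that idea, assuming the mongodb/mongodb Composer library and hypothetical database, collection, and field names:

```php
<?php
require 'vendor/autoload.php';

$collection = (new MongoDB\Client)->shop->inventory;

foreach ($freshItems as $item) {
    // With upsert: true, updateOne modifies the document whose _id matches
    // and inserts a new one if none exists, so the DB "figures it out".
    $collection->updateOne(
        ['_id' => $item['sku']],
        ['$set' => ['quantity' => $item['quantity']]],
        ['upsert' => true]
    );
}
```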

answered Dec 8, 2015 at 23:48

2 Comments

Generating 700 new IDs every half hour will exhaust the ID space at about the same time that hell freezes over.
Updating by primary key is not that bad; it performs almost like an insert.
