Instead of making multiple requests, you can make a single request by first collecting all the values into an object or array that can be sent at once.
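As a minimal sketch of that idea, assuming a #__fields_values table with field_id, item_id and value columns (the helper name buildMultiRowInsert is hypothetical, and addslashes stands in for a real driver-level escape such as Joomla's $db->quote()):

```php
<?php
// Sketch: build one multi-row INSERT instead of one query per value.
// Assumes a #__fields_values table with (field_id, item_id, value);
// buildMultiRowInsert is a hypothetical helper name.
function buildMultiRowInsert(int $fieldId, string $itemId, array $values): string
{
    $rows = [];

    foreach ($values as $value) {
        // One "(field_id, 'item_id', 'value')" tuple per value.
        $rows[] = sprintf("(%d, '%s', '%s')", $fieldId, addslashes($itemId), addslashes($value));
    }

    return 'INSERT INTO #__fields_values (field_id, item_id, value) VALUES '
        . implode(', ', $rows);
}

$sql = buildMultiRowInsert(5, '42', ['red', 'green', 'blue']);
// One statement inserts all three rows at once.
```

The point of collecting values first is exactly this: the loop only assembles data, and the database sees a single statement.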
Labels | Added: No Code Attached Yet |
Completely unrelated to the issue, and this is a bit out of my depth, but I figured I'd throw this out: you should avoid writing for ($i = 0; $i < count($value); $i++) { — having the count() call there causes it to run on every iteration of the loop. It's better to save the result of count() to a variable and reference that variable in the for loop.
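The suggestion above, as a minimal sketch (the $value array here is just illustrative data):

```php
<?php
// Illustrative data; in the original code $value holds a field's values.
$value = ['red', 'green', 'blue'];

// Avoid: count($value) is re-evaluated on every iteration of the loop.
$collected = [];
for ($i = 0; $i < count($value); $i++) {
    $collected[] = $value[$i];
}

// Prefer: compute the length once and reuse it in the condition.
$n = count($value);
for ($i = 0; $i < $n; $i++) {
    // ... same body, but the condition no longer calls count() each time.
}
```

For small arrays the difference is negligible, but hoisting the call is free and reads just as clearly.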
Regarding count() inside the for loop, I absolutely agree with you! I suppose the developers will decide when to fix it, if they ever do...
Yes, your code makes fewer database queries, and overall it is most likely less code to be executed. However, it is more code to write at this point and more code to understand when maintaining it. Generally, wasting a few queries here isn't a big deal, since we are in a (simple) write context. Writing to this table should happen several thousand times less often than reading from it. Writing a hundred custom field values to that table per save would still be a nearly negligible performance issue. In this case, the readability and maintainability of the code are preferable over raw speed.
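The simple write path being defended here might look like the following sketch. A real implementation would execute each statement through the database driver; this version just collects the generated SQL strings (buildPerRowInserts is a hypothetical helper name):

```php
<?php
// Sketch of the "one query per value" approach: one small INSERT per
// value, easy to read and maintain, at the cost of extra round trips.
function buildPerRowInserts(int $fieldId, string $itemId, array $values): array
{
    $queries = [];

    foreach ($values as $value) {
        $queries[] = sprintf(
            "INSERT INTO #__fields_values (field_id, item_id, value) VALUES (%d, '%s', '%s')",
            $fieldId,
            addslashes($itemId),
            addslashes($value)
        );
    }

    return $queries;
}

$queries = buildPerRowInserts(5, '42', ['red', 'green', 'blue']);
// Three tiny statements; slower in aggregate, but trivially readable.
```
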
And what about bulk operations? For example, a lot of items with list-type fields might need to be copied to another category.
What if my list-type fields contain many values?
And I can have a lot of such fields...
As I said, even writing a hundred values for one field won't take a really noticeable amount of time. It would be a hundred INSERT queries, each inserting one row. That is really quick. Your solution, on the other hand, could run into a different issue: your query could become too large. I've run into that in Smart Search, where inserting 1,000 terms in one query failed because MySQL couldn't process such a big query.
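If one does batch values into a single statement, the query-size problem described here (MySQL rejects statements larger than its max_allowed_packet setting) can be sidestepped by splitting the values into fixed-size batches; a sketch, with an arbitrary illustrative batch size of 100:

```php
<?php
// Sketch: split a large value list into fixed-size batches so that no
// single multi-row INSERT grows past the server's query-size limit.
function chunkValues(array $values, int $batchSize = 100): array
{
    return array_chunk($values, $batchSize);
}

$values  = range(1, 1000);          // e.g. 1,000 search terms
$batches = chunkValues($values);    // 10 batches of 100 values each

// Each batch would then become one moderate-size multi-row INSERT,
// keeping the query count low without ever building a giant statement.
```
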
Indeed, I can't argue about the big-query problem... I somehow hadn't thought about it.
In this case, the issue can be closed. Your arguments were stronger!
Still, queries in a loop are not great. I call this pattern the "machine gun" :) But here the "machine gun" really is necessary.
Status | New | ⇒ | Closed |
Closed_Date | 0000-00-00 00:00:00 | ⇒ | 2022-02-17 21:53:14 |
Closed_By | ⇒ | salitygen |
In my plugin, I put the request together this way.....
I apologize for the rough code; I wrote it as best I could, but it is more economical.
There is also a question about the #__fields_values table: why does the item_id field have a varchar type with a size of 255 characters? The item id would fit there easily, and I think it should have been an int(10), shouldn't it?