Conversation
Hi, thanks for the note. If you face memory issues, I would strongly recommend reducing parallelization rather than restricting the memory limit, since a hard limit can lead to many false negatives in the grading.
Thanks for your reply.
Based on the above, I still suggest adding a memory limit.
Thanks for following up. This is interesting; I think this behavior was not common for earlier models (since they used to perform poorly on hard problems) when I designed the autograder.
Yes, but I am not entirely sure what a good memory limit is when computing the results, given the small variance across memory limits (particularly with parallel execution). I will investigate this. Thanks for your thoughts!
Hi, I think this idea is helpful; I also face many OOM issues when testing with gpt-4o-mini.
Hi,
I find that there are sometimes memory-explosion issues when using LCB: memory usage continuously increases until my server (max 1 TB RAM) becomes unresponsive.

I made these two changes, and the memory-explosion issue has not appeared again.
Hope this fix looks good to you. :)
Thanks.
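For readers following along: one common way to add a per-process memory limit to a grading sandbox is `resource.setrlimit` applied in a subprocess's `preexec_fn`. The sketch below is only an illustration of that technique under assumed values; the 4 GiB cap and the process layout are assumptions, not LCB's actual grader code, and as noted above a cap set too low can turn correct-but-memory-hungry solutions into false negatives.

```python
import resource
import subprocess

# Assumed cap for illustration; not a value taken from LCB.
MEM_LIMIT_BYTES = 4 * 1024**3  # 4 GiB

def limit_memory():
    # Runs in the child process just before exec (POSIX only):
    # cap the address space so a runaway solution gets killed by
    # the OS instead of exhausting the host's RAM.
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_BYTES, MEM_LIMIT_BYTES))

# Stand-in for executing one graded solution.
result = subprocess.run(
    ["python", "-c", "print('ok')"],
    preexec_fn=limit_memory,
    capture_output=True,
    text=True,
    timeout=10,
)
print(result.stdout.strip())
```

A solution that allocates past the cap fails with a `MemoryError` (or is killed) inside the child only, so the parent grader and its other workers are unaffected; combining this with a lower worker count addresses both sides of the discussion above.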