ipc - Process on Linux ignoring resource limits -


I asked some questions about developing an online judge on Stack Overflow a while ago and got some good answers. I have since started working on one, and I think there is a major flaw in my code.

User-submitted source is compiled on the server by GCC in a forked process. I set a resource limit on CPU time, and when it is exceeded a SIGXCPU signal is sent to the process. That works, but suppose someone submits malicious code that handles SIGXCPU: it would keep running on the server and possibly open a way to gain remote control of the server.

So what am I missing here? Shouldn't there be a signal that cannot be caught or ignored?

The original prototype of the compiling module goes like this:

 #include <stdio.h>
 #include <sys/resource.h>
 #include <sys/wait.h>
 #include <unistd.h>

 int main() {
     int pid;
     int rv;
     if (!(pid = fork())) {
         struct rlimit limit;
         getrlimit(RLIMIT_CPU, &limit);
         limit.rlim_cur = 1;
         setrlimit(RLIMIT_CPU, &limit);
         // execl() gcc with the source file name
     } else if (pid) {
         wait(&rv);
     } else {
         printf("fork error\n");
     }
     return 0;
 }

and if the submitted source file contains anything like

 #include <signal.h>
 #include <stdio.h>

 void handler(int signum) {
     if (signum == SIGXCPU)
         printf("caught SIGXCPU signal\n");
 }

 int main() {
     signal(SIGXCPU, handler);
     while (1);
     return 0;
 }

... then this is a big problem.

On Linux, in particular, SIGXCPU can be caught, as you have discovered. But Linux will send SIGKILL to the process when the hard limit is reached (as opposed to the soft limit you set), and that really will terminate it.

(Remember, though, that you really, really want to be running this stuff chrooted anyway.)
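A rough sketch of what chrooting the forked child before the exec might look like. The jail path and uid below are hypothetical placeholders, and this is only a starting point: a real jail needs a populated directory tree, and modern judges layer on seccomp, namespaces, and similar mechanisms as well.

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical jail directory and unprivileged uid -- adjust for your setup. */
#define JAIL_DIR  "/var/judge/jail"
#define JUDGE_UID 1001

/* Confine the (already forked) child, then exec the submission.
   Requires root; returns -1 if any step fails, so the caller can
   bail out instead of running the submission unconfined. */
int enter_jail(const char *prog) {
    if (chroot(JAIL_DIR) != 0)   { perror("chroot"); return -1; }
    if (chdir("/") != 0)         { perror("chdir");  return -1; }
    if (setuid(JUDGE_UID) != 0)  { perror("setuid"); return -1; } /* drop root last */
    execl(prog, prog, (char *)NULL);
    perror("execl"); /* only reached if exec itself failed */
    return -1;
}
```

Note the ordering: chroot and chdir happen while still root, and privileges are dropped with setuid immediately before exec, so the submission never runs with root rights.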

