Except that in most cases you don't care about the startup overhead because
it only happens once. The only time it matters is in vanilla-CGI programs and
the like, where the program is run once per request. Almost everything else,
though, runs the program once and leaves it running to handle multiple
requests, and usually it's started as part of bringing up the entire system,
so it's already up and ready to accept requests before the system's open for
business and you never see the startup delay.
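
To make that concrete, here's a minimal sketch (not from the original comment) of the long-running-process pattern, using the JDK's built-in com.sun.net.httpserver package; the class name, port, and response text are just placeholders for illustration. All the startup cost is paid once in main(), before the first request ever arrives, unlike a CGI-style program that is launched fresh per request.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class LongRunningServer {
    public static void main(String[] args) throws IOException {
        // One-time startup cost (JVM launch, class loading, initialization)
        // happens here, before the system is "open for business".
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Every request after startup is handled by this already-running
        // process, so there is no per-request startup overhead.
        server.createContext("/", (HttpExchange exchange) -> {
            byte[] body = "handled by the long-running process\n".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.setExecutor(null); // default executor
        server.start();           // started once; serves many requests
    }
}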
Plus, none of this part of the process
has anything to do with the virtual machine's interpretation of instructions. It
hasn't actually begun to execute a program, so none of the run-time
optimizations at issue could have been used at this point (unless Oracle wants
to take on the FSF over the gcc compiler's code generation, and good luck with
that).