As an Asimov fan, I thought I'd point out that your paraphrasing of the Three Laws renders them unworkable: a robot cannot respond to humanity as a whole, only to the humans it encounters and interacts with. Your rewrite would create an internal conflict that would fry a positronic brain. Consider a robot told to use an electrical appliance on a grid fed by a brown-coal power station: it couldn't comply, since all of humanity would be put at risk by such cavalier pollution, unless refusing would place the human at risk. It wouldn't take Daneel to work that out!
I'm surprised Asimov wrote the 0th law at all, as it does render the other three useless:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
He did weasel around it by having Daneel say:
"In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."
Hence, Asimov relegates the 0th law to the level of a plot complication, rather than the axiom that the original three, once extrapolated, have become.