The year is 2020 (not far off, as of this report's publication) and you have just given your A.I. recruiting software a voice directive: "Delete all resumes dated earlier than 2018." Because of a design vulnerability in the voice-activated program's semantic capabilities, it deletes all 1,000+ resumes in your database, since each contains some educational qualification or work-experience date that is earlier than 2018.
Delayed by this, you later rush to lunch—in part to eat to forget—at the new all-bot sushi bar. There, the robot you tell to "bring six pieces of any sushi, and make it snappy" (having comparable voice-command functionality) promptly returns with six rubber bands neatly encased in rice and deftly placed on ready-rigged mousetraps.
Your commands were not vetted beforehand by any of the A.I. systems' software designers, engineers, or manufacturers, by another (or the same) A.I. system, by a colleague, or by yourself. No one and nothing, in other words, was ever convinced that your directives ("wishes") had only one, innocuous interpretation and an equally innocuous method of execution.
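The missing safeguard can be sketched in code. What follows is a minimal, hypothetical illustration—not a real assistant API—of a vetting step that enumerates the plausible readings of a directive and refuses to act unless exactly one innocuous interpretation remains; the example directives and the toy parser are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    description: str
    innocuous: bool

def interpretations_of(directive: str) -> list[Interpretation]:
    # Toy stand-in for a real semantic parser. The resume-deletion
    # command from the scenario has two readings, only one intended.
    if directive == "Delete all resumes dated earlier than 2018":
        return [
            Interpretation("delete resumes submitted before 2018", True),
            Interpretation("delete resumes containing any pre-2018 date", False),
        ]
    # Default: treat an unrecognized directive as a single literal reading.
    return [Interpretation(directive, True)]

def vet(directive: str) -> str:
    # Execute only when there is exactly one reading and it is innocuous;
    # otherwise hand the ambiguity back to the user instead of guessing.
    readings = interpretations_of(directive)
    if len(readings) == 1 and readings[0].innocuous:
        return f"EXECUTE: {readings[0].description}"
    return "ASK USER: directive is ambiguous or risky; please confirm intent"

print(vet("Delete all resumes dated earlier than 2018"))
print(vet("Archive resumes reviewed this week"))
```

Under this sketch, the deletion command is bounced back for confirmation rather than executed, while an unambiguous directive passes straight through.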