AI in itself doesn't directly clash with ethics. The real issue is how the technology is used... which sounds way too similar to the political arguments surrounding gun laws, but I digress....
Basic example: scams enabled by AI technology, e.g. deepfake impersonations, AI-generated scam websites, etc.
From an ethical standpoint it gets murky whether liability should fall on the person behind the computer, the software developer, or both. Should AI technologies be restricted (on the developer side) to fit some standard of lawful, ethical conduct? If so, where should the line be drawn, and how should it be standardized? How much monitoring, if any, should there be of how people interact with AI? There simply is no correct or agreed-upon answer to any of these questions.
If everyone in the world had a "perfect" moral/ethical compass, and those values were shared and exactly the same, then AI and ethics could 100% coexist without any problems. That's unrealistic though, so our only options are to either a) become president and pass our own laws, or b) wait 5-20 years and find out.
Late to the thread, but I hope this helps you and anybody else who is also curious or confused.