During a human lifetime we learn facts that are later determined to be incomplete or wrong.
We effectively mark in our memory that a retraction or modification has taken place.
Over the past year, numerous research papers have been retracted due to errors or fraud by their authors.
How do we know that these retracted research papers have not been used during the AI training cycle?
And if they have, how do we get a model trained on them to forget this defective information?
Peter Brooks
Palm Bay, Florida