How do you get an AI model to forget information it was trained on, when that information is later retracted by its authors?

During a human lifetime we learn facts that are later determined to be incomplete or wrong.

We effectively mark in our memory that a retraction or modification has taken place.

Over the past year numerous research papers have been retracted due to errors or fraud by the authors.

How do we know that these retracted research papers have not been used during the AI training cycle?

And then, how do we get the AI model to forget this defective information?

Peter Brooks

Palm Bay Florida 

  • Ed Almond replaced Nigel Fine (who retired) at the IET over a year ago. His PA is Verity Whitworth.

    Peter Brooks

    Palm Bay 

  • Yes, I was aware that Nigel Fine had been succeeded following his retirement. However, I must admit that the name of his replacement had slipped my mind. Thank you for reminding me that it is Ed Almond who has stepped into the role, with Verity Whitworth as his PA.

    Best Regards 

    Andrew

  • There are days when information comes in faster than I can keep up. Today is one of those days.

    There is a new research paper (Ethics and Information Technology) by Michael Townsen Hicks of Glasgow University at https://doi.org/10.1007/s10676-024-09775-5 that you might be interested in.

    Peter Brooks

    Palm Bay 

  • [deleted]
  • Thanks 👍

  • It is indeed enlightening to learn that the term ‘hallucinations’ has been employed to describe AI output anomalies. I agree this terminology is misleading.

  • I would hazard a guess that, like a human, AI would have a continuous learning cycle rather than simply train for a job and then never update its knowledge on that subject. Can you imagine if you simply stopped learning anything once you'd left school, or if doctors or engineers never kept up with changes or recent developments in their fields of interest?

    Are you able to add any thoughts to this discussion?

    Lisa

  • Hello Lisa:

    There is a problem with continuous learning within your selected field of interest (it's like going deep down a rabbit hole): "over-specialisation".

    One has to be a "generalist", making sure that one is not blindsided by advances in related areas; otherwise one becomes unemployable.

    I started work in vacuum tubes (valves), which died out, but I was able to make the move into solid-state devices.

    Second, when developing models to explain a physical process, one has to consider what is happening in other, unrelated fields, such as biology.

    Peter Brooks

    Palm Bay

  • Dealing with withdrawn or corrected information is a significant problem for humans, let alone AI. Papers or articles that are published in a blaze of glory are normally corrected or withdrawn in some obscure part of the publication.

    Retraction Watch tries to document this, but how many people actually look at it? Maybe it should be hardwired into AI systems; a rough sketch of what that could look like follows the link below.

    https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/
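
    A minimal sketch of that idea in Python, assuming a local copy of a retracted-DOI list such as the Retraction Watch CSV export. The file name "retractions.csv" and the DOI column name "OriginalPaperDOI" are assumptions rather than a confirmed export format, so check them against whatever list you actually have. The idea is simply to screen candidate training documents by DOI before they reach the training set:

    ```python
    import csv

    # Sketch: screen candidate training documents against a local retracted-DOI
    # list (e.g. a Retraction Watch CSV export).
    # ASSUMPTIONS: the file name "retractions.csv" and the DOI column name
    # "OriginalPaperDOI" are placeholders; verify them against the real export.

    def load_retracted_dois(path="retractions.csv"):
        """Build a set of retracted DOIs (lower-cased) from the CSV export."""
        retracted = set()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                doi = (row.get("OriginalPaperDOI") or "").strip().lower()
                if doi:
                    retracted.add(doi)
        return retracted

    def split_by_retraction(documents, retracted_dois):
        """Separate candidate documents into kept and flagged lists by DOI."""
        kept, flagged = [], []
        for doc in documents:
            doi = (doc.get("doi") or "").strip().lower()
            (flagged if doi in retracted_dois else kept).append(doc)
        return kept, flagged

    if __name__ == "__main__":
        retracted = load_retracted_dois()
        candidates = [
            {"doi": "10.1007/s10676-024-09775-5", "text": "..."},  # paper cited above
        ]
        kept, flagged = split_by_retraction(candidates, retracted)
        print(f"kept {len(kept)} documents, flagged {len(flagged)} as retracted")
    ```

    This only keeps retracted material out of future training runs; getting an already-trained model to "forget" what it has absorbed is a separate and much harder problem.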

  • Hello Roger:

    Yes, I have used this site from time to time, but what it considers major (by being highly cited) is not necessarily the most important retraction.

    When the IET quotes (sometimes verbatim) a summary from a University on a recent research paper, I attempt to download and scan the actual paper.

    Frequently the public release from the University bears little relationship to what is in the actual research paper; it is just an ADVERT for the University.

    Peter Brooks

    Palm Bay