Future of AI
How can the increasing dependence on AI-generated answers affect the integrity of online knowledge, especially when AI mistakes or speculations get repeated until they solidify into accepted facts? What solutions could be implemented to break this cycle, ensure factual reliability, and avoid bias?
I don't know about solutions, but consider what happens when enough companies take hard dependencies on third-party AI. If that AI starts crashing often enough, gets slow enough, gets tainted by enough poisoned data, or starts charging for priority access that itself becomes slow, then people will have to rethink those decisions and contemplate moving the functions in-house, which is not a trivial decision.
Yes, that's a good perspective. But developing an in-house solution would be extremely expensive and not easily feasible for most companies. I wonder if human correctors could become a job of the future once training-data quality degrades too far.