A predominant limitation is that these systems can only serve data fed to them from third-party sources, with no initiative to find, associate, and use the most up-to-date account. At present, AI does go somewhat further than search engines in delivering information, and it seems to have changed entirely how search engines deal with data. Yet unless their large language models are prudently orchestrated, whether by AI, by humans, or by hybrid arrangements, their performance in this context will remain stillborn or lagging, if not full of primitive suggestions and information that is only somewhat useful to humans. In the meantime, the original sources and those with firsthand information, if still available, should be sought out.
Why do we always place regulation primarily as a responsibility of governments?
We think self-regulation has been humanity's mode since the beginning of time. Entities working on advanced technology, or in the case of AI, operating without the usual apparent oversight, should know their own work, its effects, and its implications, if any, better than anybody else.
Have we not always said that honesty is the best policy? In AI self-regulation, whether done by the AI enterprise itself or within its industry, that maxim will be put to the test.
What do you think? (We will update this shortly but you can comment now.)