
Since 2022, the topic of AGI has gained traction in the public sphere thanks to ChatGPT, yet opinions about how it will unfold vary wildly. They are often driven by first-guess fallacies, judging by the past rather than the future, ignorance of exponential growth, herd mentality, blind faith in media and institutions, and illogical or non-existent threat assessments and policies.

Dangerous superhuman AI systems are feasible by 2024-2027, and many individuals and shady corporate entities will gain the power to wield them openly or covertly, with a delay of perhaps 2-4 years.

While it is true that positive outcomes are also likely, we can consider the following negative ones:

  • the stock market might crash permanently
  • the internet might be rendered inoperable for years
  • AGI might choose to destroy us by means we do not understand in advance
  • countries might wage large-scale wars over diminishing resources

All of these outcomes are serious, and no reasonable person can answer them with ignorance and willful disbelief. The impact of AGI is undeniable and will be unprecedented in history. It must be addressed with reasonable preparedness.

Has anyone here had similar thoughts about this issue?

in Politics, wars, problems by Master (33.3k points)


...