You're stupid if you haven't realized that the emergence of Strong AI is just biblical Armageddon. Deus ex machina. You won't have to kill yourself because you're going to disappear from Earth if you are sufficiently Godly.
Understandable, although the signs of Armageddon are many according to Revelation - we will definitely see it coming.
Let me guess, the Flynn effect is a lie somehow.
The Flynn effect has reversed over the past 30-40 years.
The shape of the distribution is certainly changing. We'd be better off with completely separate populations evolving with little to no outside influence in order to reach the global optima. Ancient times had this; within the Greek periods you had a race of bodybuilders (Sparta) and philosopher kings (Athens). Instead we're going to get the local optima which is some brown goblin mess, with a multiplicative factor of only mexicans and blacks reproducing because they're subsidized by the government and ■■■■■■■■■
The government should be giving me money to reproduce and a middling breeding age female: the core function of the government should be maximizing the right tail of the distribution for the exponential effects those people have. Wang Chang the Stochastic Calculus HFT slant-eyed fuck should be the one with offspring paid for by the government.
The core evil is ultimately societal homogenization (communism, globohomo, GAE) that occurs when you have to lower everything to the lowest common denominator.
As I always said, I think that "Local Optima" would take hundreds of years of a stable system to reach - we are not in a stable system, not even close.
How many years has globohomo been going on for? I give it a generous 30.
Btw, what's GAE? Are you talking about Google?
You should leave the thread if you're just going to say stupid shit like this. What the fuck are you talking about. Human reproduction is obviously stochastic insofar as the people performing it are effectively interacting with a black box. The argument is the amount of parallelization allocated to the problem. Fucking ■■■■■■■
It's WOW EPIC untouched Amazon tribes who haven't progressed past 10,000 BC, without realizing that if no one had ever explored the world and turned everyone into globohomo, one of these populations would be mining asteroids by now.
All I'm trying to say is that you're extrapolating too far based on a very small amount of data (attempted globohomo for 20 years).
I understand that you're the angry guy, but we already discussed this and agreed on it - remember the graph?
I don't know what this means, and how it relates to my thoughts on this at all.
Untouched Amazon tribes would obviously never be mining asteroids or making any scientific progress - they're living like animals because they had no ecological pressure to do anything else, neither from the animals around them nor from other humans (there are no other humans around).
They are essentially unevolved humans.
Within a forest, tree species are interspersed in a distinctly non-random pattern. Individuals of a species keep some minimum distance from each other, whether that's driven by pests, disease, the chance of fire, etc.
"Humanity" no longer has such a mechanism and the risk for existential collapse becomes extreme.
Clown. You're a clown. What is the "ecological" pressure to build the pyramids?
I'm not trying to be a clown, I just feel like human "evolution" accelerated around the Mediterranean, and by extension into Europe, due to the high population there and the inter- and intra-civilizational conflict. Egyptian dynasties and kingdoms had a long history of war with each other until the country was unified under pharaohs and religious leaders; the pyramids are just an evolutionary product of their religion.
Like a peacock's huge tail.
I'm sure China's dynastic war history is similar.
It's just my opinion and I welcome any other ideas.
The mean running time of a Las Vegas algorithm can often be dramatically reduced by periodically restarting it with a fresh random seed. The optimal restart schedule depends on the Las Vegas algorithm's run length distribution, which in general is not known in advance and may differ across problem instances. We consider the problem of selecting a single restart schedule to use in solving each instance in a set of instances. We present offline algorithms for computing an (approximately) optimal restart schedule given knowledge of each instance's run length distribution, generalization bounds for learning a restart schedule from training data, and online algorithms for selecting a restart schedule adaptively as new problem instances are encountered.
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.6302&rep=rep1&type=pdf
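To make the sequential restart strategy concrete, here's a minimal Python sketch. Everything in it is an assumption for illustration: the toy algorithm is just a geometric coin-flip search (restarts only really pay off for heavy-tailed run lengths), and the Luby sequence is the standard universal schedule, not the (approximately) optimal schedules the paper computes.

```python
import random

SUCCESS_P = 0.001  # assumed per-step success probability of the toy algorithm


def run_length(rng):
    """Toy Las Vegas algorithm: steps until a rare event fires. Stands in
    for any randomized search whose running time is a random variable."""
    steps = 1
    while rng.random() >= SUCCESS_P:
        steps += 1
    return steps


def luby(i):
    """i-th term (1-indexed) of the Luby universal sequence 1,1,2,1,1,2,4,..."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)


def restart_strategy(cutoffs, seed=0):
    """The sequential strategy from the abstract: run with one seed for t1
    steps, abandon and restart with a fresh seed for t2 steps, and so on.
    Peeking at the full run length and comparing it to the cutoff is
    equivalent in distribution to actually stepping the algorithm."""
    total = 0
    for i, cutoff in enumerate(cutoffs):
        rng = random.Random(f"{seed}-{i}")  # fresh random seed per restart
        need = run_length(rng)              # steps this run *would* take
        if need <= cutoff:
            return total + need             # finished inside its cutoff
        total += cutoff                     # cutoff hit: abandon the run
    raise RuntimeError("schedule exhausted before any run succeeded")


schedule = [100 * luby(i) for i in range(1, 1000)]  # base cutoff of 100 (assumed)
print(restart_strategy(schedule, seed=42))
```

The appeal of a universal schedule like Luby's is that its expected time is within a logarithmic factor of the best fixed cutoff even when the run length distribution is unknown, which is the same flavor of guarantee the abstract's online algorithms are after.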
Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. In [1] a method was developed for minimizing the expected time required to obtain an answer from A using sequential strategies which simulate A as follows: run A for a fixed amount of time t1, then run A independently for a fixed amount of time t2, etc. The simulation stops if A completes its execution during any of the runs.
In this paper, we consider parallel simulation strategies for this same problem, i.e., strategies where many sequential strategies are executed independently in parallel using a large number of processors. We present a close to optimal parallel strategy for the case when the distribution of A is known. If the number of processors is below a certain threshold, we show that this parallel strategy achieves almost linear speedup over the optimal sequential strategy. For the more realistic case where the distribution of A is not known, we describe a universal parallel strategy whose expected running time is only a logarithmic factor worse than that of an optimal parallel strategy. Finally, the application of the described parallel strategies to a randomized automated theorem prover confirms the theoretical results and shows that in most cases good speedup can be achieved up to hundreds of processors, even on networks of workstations.
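And a matching sketch of the parallel idea, with the same disclaimers: the cutoff, the toy geometric run length, and the trial counts are all made up for illustration, and taking the minimum over independent copies is just the simplest way to model "the whole thing stops when the first processor succeeds", not the paper's near-optimal construction.

```python
import random

SUCCESS_P = 0.001  # assumed per-step success probability of the toy algorithm


def run_length(rng):
    """Toy Las Vegas run length: steps until a rare event fires."""
    steps = 1
    while rng.random() >= SUCCESS_P:
        steps += 1
    return steps


def sequential_time(rng, cutoff):
    """Wall-clock steps of a fixed-cutoff sequential restart strategy."""
    total = 0
    while True:
        need = run_length(rng)
        if need <= cutoff:
            return total + need
        total += cutoff


def parallel_time(num_procs, cutoff, seed=0):
    """Parallel strategy: num_procs independent copies of the sequential
    strategy; it stops when the first copy succeeds, so the wall-clock
    time is the minimum over the copies."""
    return min(sequential_time(random.Random(f"{seed}-{p}"), cutoff)
               for p in range(num_procs))


# Mean wall-clock steps vs. processor count: near-linear speedup for this
# memoryless toy; the threshold effect the abstract describes shows up more
# clearly with heavy-tailed run lengths.
for m in (1, 4, 16, 64):
    times = [parallel_time(m, cutoff=2000, seed=s) for s in range(50)]
    print(f"{m:3d} processors: {sum(times) / len(times):8.1f} mean steps")
```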
Aren't some living species right now like a billion years old? That's how I view untouched tribes. Just unevolved species.
Yeah, cultural brain hypothesis
I'm sorry, but I'll need some explanation for this one; I'm too dumb and uneducated.