|
Post by yclept on May 5, 2011 12:20:43 GMT -5
I'm starting with a quote from "Wired for War" by P.W. Singer: "If Moore's law continues to play out, some pretty amazing advancements will happen that will shape the world of robots and war. By 2029, $1,000 computers would do twenty million billion calculations a second, equivalent to what a thousand brains can do. This means that the sum total of human brainpower would be less than 1 percent of all the thinking power on the globe."
I have no way of knowing where the cutting edge of computer development and artificial intelligence (AI) is. I think those who watched "Watson" compete on that game show awhile ago got to see a pretty autonomous machine perform very well in a domain we have always considered restricted to human intelligence. There was a Nova program about the development of Watson that disclosed it is primarily a neural net with access to great amounts of data; as a neural net, it was trained, not programmed line by line. Neural nets are not new, of course, but the ever-increasing computational power of computers allows them to be ever more complex.
The advantage the human brain has had over computers is an ability for impressive parallel processing -- tying seemingly unrelated bits of information together in unique and sometimes radically useful new ways. As AI becomes capable of more and more parallel connections, this human advantage is bound to be overcome. The first AI that is able to design another AI superior to itself will set in motion an exponential sequence of development of ever-more-intelligent machines. Our poor old human brains will go from a situation where machines beat us on access to data and speed of processing to one where we are also inferior in quality of processing. At that point I don't see humans as able to compete with machines in decision-making ability. Watson demonstrated a much more sophisticated ability to parallel process the whole range of knowledge than did famous machines before it, such as Deep Blue, which could play chess well enough to easily defeat Kasparov but couldn't do much else.
Watson was a fun demonstration to watch. I can't believe its talents (or those of the next generation that probably already exists) aren't being put to use in more remunerative pursuits, and investing is one of the most lucrative fields where that could be done. Personally, it makes me wonder how long humans will be able to make investment decisions superior to those made by machines (assuming that's still the case even now). I can see this becoming a very disruptive and harmful influence on society: if those who can afford the machines have an absolute and insurmountable market advantage, the net flow of wealth will continue to be away from the general populace and toward the already rich. Humans are stubborn enough to play the game long after it becomes futile -- I know I am.
I'm hoping to get some feedback from folks who know more about the state of the art in processing ability and AI. Does anyone know how close we are to Ray Kurzweil's singularity (the point at which things become so complex and move so fast we just don't know what's going to happen next)? This might be too small and narrow a forum for this discussion, but one never knows unless one tries!
en.wikipedia.org/wiki/Ray_Kurzweil
www.singularity.com/
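To make the extrapolation concrete, here is a rough sketch of the doubling-curve arithmetic behind projections like "twenty million billion calculations a second by 2029." The 2011 baseline (about 10^12 operations per second for $1,000) and the 1.5-year doubling time are illustrative assumptions, not figures taken from the book.

```python
# Rough sketch of the doubling-curve arithmetic behind projections like
# "twenty million billion calculations a second by 2029". The 2011 baseline
# (1e12 ops/sec per $1,000) and the 1.5-year doubling time are assumptions
# for illustration only.

def ops_per_second(year, base_year=2011, base_ops=1e12, doubling_years=1.5):
    """Extrapolate $1,000-computer performance assuming steady doubling."""
    return base_ops * 2 ** ((year - base_year) / doubling_years)

for year in (2011, 2020, 2029, 2045):
    print(f"{year}: ~{ops_per_second(year):.2e} ops/sec per $1,000")
```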
|
|
|
Post by danshirley on May 5, 2011 13:11:49 GMT -5
The development of Watson involved the creation of a new architecture referred to as DeepQA: www.stanford.edu/class/cs124/AIMagzine-DeepQA.pdf
If the stock market becomes a place dominated by a host of Watson-like machines, there will be no place for traders like us. Maybe just as well. :-)
|
|
|
Post by danshirley on May 5, 2011 13:12:09 GMT -5
:-)
|
|
|
Post by yclept on May 5, 2011 13:44:39 GMT -5
I didn't know this until I read all of one of the links I provided above. I'd known of Kurzweil and read some of his books, but didn't know he personally was implementing any of this strategy. From the Wikipedia link above:
"In 1999, Kurzweil created a hedge fund called 'FatKat' (Financial Accelerating Transactions from Kurzweil Adaptive Technologies), which began trading in 2006. He has stated that the ultimate aim is to improve the performance of FatKat's A.I. investment software program, enhancing its ability to recognize patterns in 'currency fluctuations and stock-ownership trends.'"
Looks like it still exists: www.fatkat.com/
|
|
2kids10horses
Senior Member
Joined: Dec 20, 2010 20:15:09 GMT -5
Posts: 2,759
|
Post by 2kids10horses on May 5, 2011 22:33:49 GMT -5
yclept,
Before I make my comments, please realize I have not read the material you referenced, so maybe what I'm about to say is irrelevant...
The underlying assumption of your fear of AI is that it will become superior to our human process and therefore, we will not be able to determine if a stock is undervalued using traditional fundamental analysis. Because if a stock were undervalued, an AI system would have already determined that it is, indeed, undervalued, and then purchased the shares in the open market, causing the price to rise, until the stock is no longer undervalued. Correct?
I guess my point is, who is to say what "undervalued" is? Each investor has his own cost of capital, each investor requires his own return on investment. Two different AI systems might evaluate any given stock differently based upon the objectives programmed into them.
I believe the principles of supply and demand will still apply. Where supply is limited, the price will be determined where the demand curve intersects the supply line. Our jobs as investors will be to judge whether we think the future demand for those stocks will increase or decrease. AI can't determine the "right" price for a stock. It can say, "other stocks with similar characteristics are selling for X times earnings, or Y times book value, so this stock SHOULD be selling for $x.xx." If the AI system then goes out and buys stock to make that happen, but other investors disagree and sell the stock, the price will fall.
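A minimal sketch of that "X times earnings" comparison, just to make the logic concrete; every ticker and number below is made up.

```python
# Minimal sketch of the "similar stocks sell for X times earnings" comparison.
# All tickers and figures are made up for illustration.

peers = {            # ticker -> (price, earnings per share) of comparable firms
    "AAA": (30.0, 2.5),
    "BBB": (45.0, 3.0),
    "CCC": (22.0, 2.0),
}

peer_pe = sum(price / eps for price, eps in peers.values()) / len(peers)

target_eps = 1.80    # earnings per share of the stock being valued (assumed)
implied_price = peer_pe * target_eps

print(f"average peer P/E: {peer_pe:.1f}")
print(f"implied 'fair' price: ${implied_price:.2f}")
# Whether the market ever agrees and trades it there is another matter --
# that is the supply-and-demand point above.
```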
I think we humans can still compete!
|
|
|
Post by yclept on May 6, 2011 1:42:08 GMT -5
"The underlying assumption of your fear of AI is that it will become superior to our human process" I wouldn't exactly describe it as fear, but rather resignation. It isn't of much use to fear the inevitable. The moment the first AI can design another AI superior to itself, the advance after that will be exponential. Simple screening programs can already do a better and faster job of parsing data for fundamental and technical data than an unaided human can. Even a screener that is programmed to continually make subtle changes in its search parameters and constantly compare outputs to determine the "best" current settings, is really not doing anything that a neural net can't do. I'm sure there are plenty of hedge funds and other institutional traders using systems like that right now. When systems like that run amok, we can get a flash crash (whose anniversary is tomorrow, 5/6). On the other hand, a more sophisticated trading "mind" with sufficient capital could both cause and profit from such an event -- a thing that humans aren't capable of (at least not since laws restricted the robber barons of the late 19th and early 20th century when virtually all "news" was conjecture and rumor). I know that within a far shorter period than the average person can imagine, a common computer will joke with you and pun -- and the puns will be funny. I think within 20 years anyone who needs to keep up with any intellectual pursuit will have to have silicon (or some other "chip" material) implanted and connected to his/her brain. I think a machine passing the Turing test is less than 5 years in the future, and that your computer will be a better poker player than you are -- playing not just the probability of the cards (which any machine could do now), but also beating the other player, and its "poker face" will be perfect. All biological computers (us) really have going for us is massive parallel processing and sophisticated feedback loops (which are as often wrong as right). Within a very short time we are going to lose both of those edges. I believe our capabilities will be dwarfed to the point of irrelevance. I think it's worthwhile to try to predict how that world is going to look so we will be better able to recognize and react to it when it arrives. I'm guessing that in the early stages, human investors will still be able to work with small illiquid stocks because they will still be too small for large institutions to use. We will be able to retreat there for awhile until the massive computers become so cheap that they can employ themselves profitably even in those stocks.
|
|
|
Post by yclept on May 6, 2011 11:12:36 GMT -5
|
|
|
Post by yclept on May 8, 2011 9:48:58 GMT -5
As I continue to ponder the consequences of humanity becoming an insignificant component of world intelligence, it occurred to me that even though computing consumes less electric power per computation all the time, the overall electrical power consumed would be continually going up and would eventually exceed the world's potential generating capacity. Of course, the day when hard decisions have to be made about shutting down what are still very powerful computers could be postponed almost indefinitely by turning off the least powerful computers and using the power they were consuming for better purposes. The carbon-based, autonomous computers will have fallen to the bottom of the computing-power pile early on, and turning all of them off also saves massive amounts of indirect electrical power usage: the power to keep them warm and cool, the power to transport them from one point to another for no good purpose, the power to light their nights and entertain them, the power to grow their carbon-based food. Yes, turning off the autonomous, self-propelled biological computers will solve the problem for many years to come. I'm sure it will be morally justifiable on the basis of home-world security.
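The premise in rough numbers: energy per computation keeps falling, but if total computation grows faster than efficiency improves, the aggregate draw still climbs. The starting values and both growth rates below are assumptions, chosen only to show the shape of the curve.

```python
# Energy per operation falls while total computation rises; if demand grows
# faster than efficiency improves, aggregate draw still climbs. The starting
# values and both growth rates are assumptions, chosen only to show the shape.

joules_per_op = 1e-9       # energy per operation today (assumed)
ops_per_year = 1e24        # world computation per year today (assumed)
SECONDS_PER_YEAR = 365 * 24 * 3600

for years_out in range(0, 31, 5):
    energy = joules_per_op * 0.7 ** years_out   # 30% per year efficiency gain
    work = ops_per_year * 2.0 ** years_out      # demand doubles every year
    average_watts = energy * work / SECONDS_PER_YEAR
    print(f"year +{years_out:2d}: average draw ~{average_watts:.2e} W")
```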
|
|
|
Post by yclept on May 8, 2011 12:44:39 GMT -5
Here's a counter-argument by that old dolt James Watson, who together with Francis Crick stuck together stick-and-ball models meaninglessly until they were able to pirate the x-ray crystallography work of Rosalind Franklin (to whom they never gave credit) to finally come up with the correct structure of DNA. In doing so, they beat Linus Pauling (discoverer of the alpha helix, who had his own x-ray photos of DNA that were not yet as definitive as Franklin's) to the structure. So anyway, the old dolt (not that I'm biased in any way!) says this: www.phrenicea.com/chiphead.htm
Wikipedia has a pretty balanced article about the current state of AI: en.wikipedia.org/wiki/Artificial_intelligence
Of course, in order to become the ultimate stock-trading mind, many of the niceties required by the Turing test and the ability to fully negotiate the real world aren't necessary. Limiting the required field of response to what is really the relatively narrow realm of financial investing will hasten the day when a machine can improve its own algorithm and thereby create the next, better version of itself, to the point of an exponential rate of improvement. It may well be already upon us and we just don't know it yet. We should be able to tell when we experience the inexorable and unavoidable decline of the value of our portfolios regardless of the strategy and tactics we employ. Hopefully we will then recognize that we are unable to compete anymore. It's a "fall of the sparrow" type of quandary.
|
|
|
Post by danshirley on May 8, 2011 18:28:27 GMT -5
I have a plaque on my wall that says 'Scientific Achievement Award'. It's from Smith Kline Beckman, and I got the award along with a check for fixing Smith Kline's clinical data management system and updating their computer systems. en.wikipedia.org/wiki/Smith,_Kline_%26_French
What I did had absolutely nothing to do with AI... or scientific achievement for that matter... but as a result of that project I got to start my own business as a consultant to big pharma companies. I had a little office over an accountant in Bucks County, PA, adjacent to the 'drug company ghetto' in north Jersey. www.thelabrat.com/jobs/companies/BiotechNewJersey.shtml
One day I got a call from my contact at a very big pharma company (I can't say who) saying they wanted to come and see me about a project. None of my clients had ever come to me before... I always went to them, and my office was no place to hold a meeting of any consequence... or any meeting at all for that matter. They were insistent, however, and I agreed. They showed up the next morning in three huge Cadillac limos. I can still see what it looked like in front of my little office with those big limos parked there... like a lost funeral procession.
What they wanted was for me to produce for them a computerized clinical trial management system that would actually talk to investigators on the phone, assign patients to treatment groups, answer procedural questions, and unblind a treatment code when a patient got into trouble -- i.e., during the trial only the computer would know who was on what treatment. In those days the fact that the computer would actually talk to the investigator on the phone (using clips of my voice and touch-tone input) put it into the realm of AI. They not only wanted me to produce the system, they wanted me to run it for their trials, and they specifically wanted me to run it for other drug companies also... They wanted to have an arm's-length relationship with the system so the FDA would not worry about them injecting their bias into the production and operation of such a system. I was the unbiased, independent implementer and operator.
Another reason they came to me for this project related to why these companies gave me projects in the first place. If they presented the need for such a system to their own IT people, they would get back a project proposal of a few million dollars, 25 personnel, new computers, and a multi-year schedule. They didn't have that kind of time and budget, and they didn't get along with their own IT personnel well enough to have a realistic talk about it.
I thought carefully about the proposal... for about a minute... and agreed to do it. I produced the system in about two months, took it down to the FDA with them, and got clearance to run it on a live protocol. The rest is history. The system became the basis for my business, put my wife through medical school, and paid for our house, and I sold the whole business to a large consulting company about 10 years ago.
The system did NOT use any sophisticated AI technology like 'neural nets' but was in fact written as a series of nested subroutines, most in 8086 assembler. The system ran for years in my little office. I just added 25 phone lines and two racks of Compaq computers.
en.wikipedia.org/wiki/Compaq_Deskpro
en.wikipedia.org/wiki/Assembly_language
en.wikipedia.org/wiki/X86_assembly_language
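For readers curious what the two core jobs named above look like in code, here is a minimal modern sketch of blinded assignment and emergency unblinding. It is not that system (which was nested 8086 assembler behind 25 phone lines); the arm names and code format are hypothetical.

```python
# Minimal sketch of blinded assignment and emergency unblinding -- not the
# actual system described above. Arm names and the code format are hypothetical.
import random

TREATMENTS = ["active", "placebo"]   # assumed two-arm trial
_blind = {}                          # blinded code -> (patient id, treatment arm)

def assign_patient(patient_id):
    """Randomize a patient and hand the caller only a blinded code."""
    code = f"PX-{len(_blind) + 1:04d}"
    _blind[code] = (patient_id, random.choice(TREATMENTS))
    return code                      # the investigator never sees the arm

def unblind(code, reason):
    """Reveal one patient's treatment, e.g. after a serious adverse event."""
    patient_id, arm = _blind[code]
    print(f"UNBLINDED {code} (patient {patient_id}): {arm} -- reason: {reason}")
    return arm

code = assign_patient("site3-patient17")
unblind(code, "serious adverse event")
```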
|
|
|
Post by yclept on May 18, 2011 10:14:19 GMT -5
|
|
|
Post by yclept on May 26, 2011 11:41:04 GMT -5
|
|
|
Post by yclept on Jun 2, 2011 9:19:28 GMT -5
|
|
|
Post by yclept on Jun 3, 2011 9:50:42 GMT -5
|
|
Aman A.K.A. Ahamburger
Senior Associate
Viva La Revolucion!
Joined: Dec 20, 2010 22:22:04 GMT -5
Posts: 12,758
|
Post by Aman A.K.A. Ahamburger on Jun 3, 2011 23:41:42 GMT -5
|
|
|
Post by yclept on Jun 4, 2011 11:09:44 GMT -5
Thanks for the link, Ahamburger; I hadn't seen that. One small paragraph was particularly intriguing to me: "Will a physician ever blindly accept a diagnosis coming out of a computer? I don't think that will happen anytime soon," he said.
It misses the societal point that I've been exploring in this thread. The doctor will be the much less intelligent and less educated of the two diagnosticians -- the doctor won't have a vote. The doctor's input won't be needed anymore, or even tolerated. A relevant analogy to the current situation would be to have the "best" diagnostician in the world evaluate a condition, but then wonder whether that doctor's conclusions should be tempered by the opinions of a high school student who just got an "A" in biology. The medical doctor's input will be both inferior and probably dangerous. Finally, after machines design the next better version of themselves, the mental divergence between humans and machines will be parabolic. There will be no extended period during which people get used to machines being a little better at a few things before they are superior at all things. They will pass us in a flash and just keep going.
On the bright side for humanity, I see a time when the best medical diagnosis is available to all at no cost. The "problem" with Medicare funding probably evaporates into an absurdity. Most treatment (for example, surgery) will also be performed far better by machines. Doctors, nurses, and such will still be needed for common tasks that require flexibility and empathy: fluffing pillows, changing beds, emptying bed pans, eating the patient's chocolates, and so on.
|
|
Virgil Showlion
Distinguished Associate
Moderator
leones potest resistere
Joined: Dec 20, 2010 15:19:33 GMT -5
Posts: 27,448
|
Post by Virgil Showlion on Jun 4, 2011 12:38:49 GMT -5
You have to note that in Watson's case, the machine has no cognizance of what it's learning. It runs correlations and a glorified Google search based on the likelihood that potential answers are related to the terms/structure of the question. A human can easily do the same, albeit with access to a search engine. The notion that Watson excelled "in a domain we have always considered restricted to human intelligence" isn't accurate. Watson happened to be the first instance where speech recognition, millisecond response to a vast search query, and a bit of neural-net logic to filter out false positives were tied together in a way the public knew about. Individually, these milestones have been around for decades.
Kurzweil's thesis is interesting (although if the arguments in his latest documentary are the best he can come up with, I wouldn't hold my breath). Man is quite skilled at building machines that mimic the behaviours of intelligent beings, and not so skilled at actually building intelligent machines.
Interesting articles in #10 and especially #11 (since it's the field I'm working in). Regarding #10, they've had a functional model of a rat brain for several years now. As for your fantastic prognostications in #15: baby steps, sir.
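A caricature of that "glorified search plus scoring" description in code: retrieve a snippet per candidate answer and rank candidates by how much their snippets overlap the clue's terms. This is only a toy, not how DeepQA is actually built; the clue and snippets are invented.

```python
# Rank candidate answers purely by how much their retrieved snippets overlap
# the clue's terms. A caricature of the retrieve-and-score idea; the clue and
# snippets are invented.

def overlap_score(clue, snippet):
    clue_terms = set(clue.lower().split())
    return len(clue_terms & set(snippet.lower().split())) / len(clue_terms)

clue = "This Danish prince delays avenging his father in a Shakespeare play"
candidates = {   # candidate answer -> one retrieved text snippet
    "Hamlet": "Hamlet is a Shakespeare play about a Danish prince avenging his father",
    "Macbeth": "Macbeth is a Shakespeare play about a Scottish general",
    "Ophelia": "Ophelia is a character in the play Hamlet",
}

ranked = sorted(candidates, key=lambda a: overlap_score(clue, candidates[a]), reverse=True)
for answer in ranked:
    print(f"{overlap_score(clue, candidates[answer]):.2f}  {answer}")
```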
|
|
Aman A.K.A. Ahamburger
Senior Associate
Viva La Revolucion!
Joined: Dec 20, 2010 22:22:04 GMT -5
Posts: 12,758
|
Post by Aman A.K.A. Ahamburger on Jun 4, 2011 13:02:54 GMT -5
NP yclept
That's interesting, Virgil; that was my understanding of Watson, thanks for clearing that up... I like this comment...
How true is that! Just ask anyone who has bought a new car and had the transmission drop out after a few hundred k. lol
|
|
|
Post by yclept on Jun 5, 2011 12:09:42 GMT -5
I guess what I'm pondering is how the world will change when two specific but related events happen.
1) Right now humans design computing hardware using computer-assisted analysis and design. A time will come when that specific task will no longer need human input. I think that day is soon. At that point a computer will be able to design a new computer whose physical computing power is greater than its own.
2) In a similar but more distant vein, AI is now designed by humans, usually to mimic certain human capabilities. A day will surely come when an AI will be able to design and write the code for the next AI, one more powerful than itself (perhaps still inferior to a human, but the point is that it will be better than the AI that designed and wrote it). It will then be capable of evolution and will probably not try to mimic the human/animal model anymore. Then each new AI will be able to write another that is more powerful than itself. The rate of improvement will be parabolic.
So while it looks like it will take awhile before the first instance of this happens, once it does, the improvement will have near-infinite potential. At some point (and I think that time is closer than most people appreciate) our biological computers get left in the dust.
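The argument in 2) as a toy iteration: each generation designs a successor slightly more capable than itself, and the gains compound. The 10% per-generation figure and the "human ability = 1.0" scale are assumptions; only the shape of the curve is the point.

```python
# Each generation designs a successor 10% more capable than itself; the gain
# compounds. The 10% figure and the "human ability = 1.0" scale are assumptions.

capability = 0.5          # the first self-improving AI, on a scale where human = 1.0
GAIN = 1.10               # each generation is 10% better than its designer

for generation in range(1, 31):
    previous = capability
    capability *= GAIN
    note = "  <-- overtakes human level" if previous < 1.0 <= capability else ""
    if generation % 5 == 0 or note:
        print(f"generation {generation:2d}: capability {capability:6.2f}{note}")
```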
|
|
Aman A.K.A. Ahamburger
Senior Associate
Viva La Revolucion!
Joined: Dec 20, 2010 22:22:04 GMT -5
Posts: 12,758
|
Post by Aman A.K.A. Ahamburger on Jun 5, 2011 15:52:14 GMT -5
I understand what you're saying, yclept; however, I think what Virgil is trying to point out is that even the smartest computer still needs input and has no ability to learn.
I think you want to start looking more along the lines of the Singularity movement, which has the singularity pegged around 2045. To me it makes more sense... AI will mean Artificial Implants. Our memory is not the best, and with artificial intelligence implanted into that biological CPU, we will be onto the next step, IMO.
|
|
tyfighter3
Well-Known Member
Joined: Dec 20, 2010 13:01:17 GMT -5
Posts: 1,806
|
Post by tyfighter3 on Jun 5, 2011 23:48:51 GMT -5
Sounds like a Cure for Alzheimer's. I can't wait.
|
|
rovo
Senior Member
Joined: Dec 18, 2010 14:20:19 GMT -5
Posts: 3,628
|
Post by rovo on Jun 6, 2011 9:35:19 GMT -5
If we just think about the computer chips in a typical PC, we are looking at something like 1.17 billion transistors in the current 32 nm i7 chip from Intel. There was a time when the transistors were "scaled" by hand, but those days are long gone due to the complexity and just the sheer number of transistors. Parameters are entered based on physics, lab tests, and small prototype transistors. This data is then processed and applied to the latest chips.
The scaled transistors have to be applied to the chip in blocks relating to the functions required for each block. Again, there are too many gates to even consider doing it by hand. Placement becomes ever more important as transistor speeds continually increase. Resistance of lines, gate capacitance, parasitic inductances and capacitances, power routing -- it all becomes overwhelming to a human. Computers designing computer chips do not fatigue or generate errors as humans do.
A modern computer chip would take tens of thousands of man-years to create by hand, provided humans could even produce such a device without errors, which is highly unlikely. As it is, the lithography masks are totally computer generated, or, skipping masks entirely, the silicon is written directly by electron beam.
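To give a feel for the bookkeeping involved, here is a toy version of one small piece of placement: estimating wiring cost per net with half-perimeter wirelength (HPWL), a standard quick metric, since resistance, capacitance, and delay all scale with length. The cells, coordinates, and netlist are invented.

```python
# Half-perimeter wirelength (HPWL): a standard quick estimate of the wiring a
# placement implies, since resistance, capacitance and delay all grow with
# length. Cell coordinates and the netlist are invented.

cells = {   # cell name -> (x, y) placement, in microns
    "inv1": (0.0, 0.0), "nand2": (3.0, 1.0), "dff1": (1.0, 4.0), "buf1": (5.0, 5.0),
}
nets = [    # each net is the set of cells it must connect
    ["inv1", "nand2", "dff1"],
    ["nand2", "buf1"],
]

def hpwl(net):
    xs = [cells[c][0] for c in net]
    ys = [cells[c][1] for c in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

total = sum(hpwl(net) for net in nets)
print(f"estimated total wirelength: {total:.1f} um")
# A placer shuffles millions of cells to shrink this total while still meeting
# timing and power limits -- bookkeeping on a scale no human can do by hand.
```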
|
|
rovo
Senior Member
Joined: Dec 18, 2010 14:20:19 GMT -5
Posts: 3,628
|
Post by rovo on Jun 6, 2011 9:48:27 GMT -5
My experience has been that humans can do a better job of chip design than machines, but the wasted space (by the machine) is the trade-off for getting the job completed.
As an example: gate arrays are used to make application-specific ICs (integrated circuits) where the volume is too low to justify a full custom IC. I designed a gate-array interconnect on what was then a state-of-the-art array of a massive 880 gates. (Note: today's gate arrays are on the order of low millions of gates.) The computer-generated interconnect could not fit the function on the die, but I was able to do so. The cost differential was astronomical: the computer failed, but did so within hours; I was successful, but it took me months to cram the needed functions onto the array. I knew there were enough gates to complete the task, but getting the required utilization of 99+% was well beyond the capabilities of the computer. The computer just tried to make the interconnects without thinking about what was coming next. A human could look ahead and plan.
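That greedy-versus-lookahead point, reduced to a toy routing problem: assign nets (horizontal spans in a channel) to two tracks. A first-fit router that grabs the first free track gets stuck, while a search that can back up and reconsider (a crude stand-in for human lookahead) finds a complete fit. The nets and spans are invented.

```python
# Assign nets (horizontal spans in a channel) to two routing tracks. First-fit
# grabs the first free track and gets stuck; backtracking search, standing in
# for a human planning ahead, finds a complete fit. Nets and spans are invented.

TRACKS = 2
nets = [("n1", 1, 3), ("n2", 9, 11), ("n3", 7, 9), ("n4", 2, 8)]  # (name, start, end)

def overlaps(a, b):
    return not (a[2] < b[1] or b[2] < a[1])

def first_fit(order):
    placed = {t: [] for t in range(TRACKS)}
    for net in order:
        for t in range(TRACKS):
            if all(not overlaps(net, other) for other in placed[t]):
                placed[t].append(net)
                break
        else:
            return None                    # no free track: the router is stuck
    return placed

def backtrack(remaining, placed):
    if not remaining:
        return placed
    net = remaining[0]
    for t in range(TRACKS):
        if all(not overlaps(net, other) for other in placed[t]):
            placed[t].append(net)
            result = backtrack(remaining[1:], placed)
            if result:
                return result
            placed[t].remove(net)          # undo and try the next track
    return None

print("first-fit :", first_fit(nets))
print("backtrack :", backtrack(nets, {t: [] for t in range(TRACKS)}))
```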
|
|
|
Post by yclept on Jun 6, 2011 9:58:10 GMT -5
I think before long the hardware will be migrating away from silicon. Non-binary, multi-state (more than on/off) processors of some sort can't be too far over the horizon, especially as we are reaching the limits of what is possible with silicon.
|
|
rovo
Senior Member
Joined: Dec 18, 2010 14:20:19 GMT -5
Posts: 3,628
|
Post by rovo on Jun 6, 2011 10:44:57 GMT -5
Hardware has supposedly been migrating away from silicon for decades. LOL. It was expected that the move below micron dimensions was going to be nearly impossible, yet we are currently bringing 25 nm online. Yes, at some point there will be a problem with the physics, but I expect that point to occur when a rogue electron or two can upset a gate -- that is, a gate composed of a handful of atoms. There was also the "insurmountable" problem of random radiation upsetting cells a couple of decades ago; I think it was beta radiation. Surprisingly, the problem just disappeared as new coatings were invented.
Multilevel cells are currently used in some flash memory and have been explored continuously for about 15 years. Flash memory, and memory in general, lends itself to this technique because the repetitive nature of memory chips means fewer receivers are required. Speed is always a problem with multi-level technology.
The latest technology from Intel, 3D transistors, looks promising and is going into production on the next shrink. The biggest advantage is better-controlled "ons" and "offs", resulting in lower power consumption. Lower power consumption means longer battery life in notebook computers and more transistors in desktop devices.
Something to think about: the power consumption of the i7 is about 100 watts. That is a lot of power to get out of the chip. The supply voltage ranges from 0.8 volts to 1.375 volts. Amperage-wise, that is about 100 amps!
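The arithmetic behind that last figure, for anyone who wants to check it: current is power divided by voltage, using the ~100 W figure and the supply-voltage range quoted above.

```python
# Current = power / voltage, using the ~100 W figure and the supply-voltage
# range quoted above.
power_watts = 100.0
for volts in (0.8, 1.375):
    print(f"at {volts:.3f} V: about {power_watts / volts:.0f} A")
```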
|
|
Aman A.K.A. Ahamburger
Senior Associate
Viva La Revolucion!
Joined: Dec 20, 2010 22:22:04 GMT -5
Posts: 12,758
|
Post by Aman A.K.A. Ahamburger on Jun 7, 2011 1:18:34 GMT -5
"Sounds like a Cure for Alzheimer's. I can't wait." Practical as always.
Rovo, I hear you about the power on the i7; my X2 4600 runs at 143 W.
|
|
|
Post by yclept on Jun 7, 2011 17:02:19 GMT -5
|
|
Aman A.K.A. Ahamburger
Senior Associate
Viva La Revolucion!
Joined: Dec 20, 2010 22:22:04 GMT -5
Posts: 12,758
|
Post by Aman A.K.A. Ahamburger on Jun 8, 2011 0:54:23 GMT -5
It's a great presentation, thanks. I like his last question. I would respond with a question of my own: if everyone were part of the business world and operated like us, would there be war on a large scale anymore? This speech was given before the recent democracy developments. I can tell you that a lot, though not all, of the kids around the world are sick of the killing.
|
|
uncle23
Well-Known Member
Joined: Dec 18, 2010 10:10:19 GMT -5
Posts: 1,652
|
Post by uncle23 on Jun 21, 2011 17:52:19 GMT -5
....
I like this thread, Yclept.... I know I can't compete, but I plan to play not to lose....
Question.... will there be an evil AI and a good AI?
|
|
|
Post by yclept on Jun 21, 2011 22:03:36 GMT -5
That's the scariest part. Once an AI can write the next better version of itself, we will have to hope we can still have enough influence to keep the progression forward moral and friendly to humans.
Since the earliest implementations are almost certain to be in autonomous war machines, I kind of doubt that we will be able to do so. I think the best we'll be able to hope for will be a righteous hive mind that controls the semi-autonomous "Gorts" (the robot in The Day the Earth Stood Still) and directs them to harm only those who are harming others. But I see so many ways that can go wrong. For one, the machines will be superior; they will know they are superior; and I see little reason to think they will bother to keep us around.
In the infinite possibilities in space, I suspect many cognizant biological beings (like ourselves) have ceased to exist not long after they reached the technological capability that lies only a few years ahead of us.
I do hope the machines decide to keep dogs around. Dogs are the best human accomplishment.
|
|