Thank you for watching "Decoding the Past: Doomsday 2012" If you'd like to purchase a DVD of the show, please visit the History Channel's Online Store by clicking here.
The "web bot project" has been underway since the late 1990s and is the brainchild of a fellow we just call "Cliff." Cliff's site, which publishes the actual data runs, is located at www.halfpasthuman.com. There are two kinds of runs: Private and Public.
Private runs are done on a retainer basis as staff time permits. The cost is $30,000 per run, and a private run gives the client the opportunity to build specific descriptor sets that are used to focus the run. Typical target date ranges are six months to one year out.
The Public runs (which operate in both seeded and serendipitous modes) are available on a subscription basis and cost $200 for each series of six or seven weekly reports, which are distributed before, during, and at the conclusion of a data run. The weekly updates are typically 10-15 pages in length. We do not recommend that you subscribe unless you have a high disposable income - don't spend your grocery money on this - we offer no lottery numbers! Further, anything really big and broadly impacting is often mentioned in advance on my free public news & finance site, www.urbansurvival.com.
These reports encapsulate expected change in the future based on a complex proprietary method of looking at future-oriented writings and extracting emotive and carry values, which in turn give us what seems to be a sense of future events. As events come closer, I often discuss them at length in my weekly Peoplenomics reports. Peoplenomics is published every week of the year ($40/year), and you can click here for subscription information.
What's the difference between HalfPastHuman, Peoplenomics, and the free daily updates at www.urbansurvival.com?
Strange as it may seem, we've discovered that when you start "messing around with time," ethics become incredibly important, for several reasons.
First, we are extremely clear that there is no witchcraft or woo-woo in what we do - it's hard core computer science and radical linguistics. So, no magic to be had. And all the source code is kept in "ready to disappear" mode.
Secondly, we don't do "people" predictions unless there is a reason to. Most people are too emotionally "hot" to deal with. And with most, who cares?
There are exceptions, one of which was a couple of Decembers back, when we argued internally over how to handle the linguistic appearance of gun/shoot/wounding in the vicinity of a reference to Vice President Dick Cheney. Although the event that followed was the infamous wounding of a hunting companion by the Vice President, we kept the run descriptions in advance of that event deliberately vague. Seeing the future too clearly holds great risk, the largest of which is being right too often.
We also don't do lottery numbers. This is a young science and we learn something new about how it works every time we do a data run.
Thirdly, we offered the technology to the US Government (for a fair price) in October 2001, before the track record was well established, in response to a DoD Broad Area Announcement. However, since then, we've decided to keep the project low key and out of headlines and government control. Even talking with the History Channel's production company puts us out at the edge of our desired profile. We turn down media requests regularly because we're not trying to sell anyone anything. Universe provides, we've discovered.
Fourth: A lot of people might think we could use the technology to become fantastically wealthy. So far, we have decided NOT to make money on the project, other than to share glimpses of coming events with subscribers and to look at only the most general investment questions, like the general outlook for real estate, precious metals, and paper assets.
Why not get fabulously wealthy? Primarily because we don't like to make money on other people's miseries. "So if we had better geographic descriptions in August 2004 when we predicted the quake with '300,000 dead' and 'land driven back to a previous age' would you have shorted Indonesian hotel stocks?" asked Cliff, last time I broached the subject.
The technology captures the "Oh my God! Did you see on TV just now..." kind of emotive waves that sweep around the planet. It doesn't catch $50 lotto winners.
On the other hand, we're quite candid that in modelspace, the US Dollar (and paper assets in general) are in for tough sledding as the year wears on, and we've rolled out of paper into largely tangible assets as a result. In those transactions, there has been a willing counterparty and fair values all around.
How exactly does the technology work? Perhaps a good place to begin is at the beginning:
In June 2001 I began to correspond with a reader of my website who said he was willing to share access to a promising new web technology, on the condition that I protect his identity. The person related that he had been a very senior programmer with a software company in the Pacific Northwest (you can guess which company, right?) and besides being a SQL ace, he was also heavily into linguistics and a language called Prolog, which is more like an artificial intelligence language than anything else.
I was skeptical, to be sure, but a few days after we began the email exchange of ideas, he sent me a program he had written that turns a computer into a speed-reading tool. It was based on rapidly displaying individual words on a computer screen. He said this was a technology that he had developed and sold for a while on the Internet. He also explained how the development rights to the technology had been sold to a company (then www.ebrainspeed.com). The technology Cliff developed is still available, by the way, as the Vortex Reader and may be purchased for $30.
After looking up the patent he held for the technology, I was convinced that this fellow was for real and might be on to something with the method of looking for linguistic shift on the Internet as a tool to forecast future events.
He described how the technology worked. A system of spiders, agents, and wanderers travels the Internet, much like a search engine robot, and looks for particular kinds of words. It targets discussion groups, translation sites, and places where regular people post a lot of text. No, we don't do email scanning. That's what we have government for. And email activists.
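The flashing-one-word-at-a-time idea he described is known generically as rapid serial visual presentation (RSVP). Here is a minimal terminal sketch of the concept - my own illustration, not the actual Vortex Reader code; the display width and the `words_per_minute` default are assumptions:

```python
import sys
import time

def rsvp_words(text):
    """Yield the words of *text* one at a time, in reading order."""
    for word in text.split():
        yield word

def rsvp(text, words_per_minute=300):
    """Flash each word at a fixed screen position, RSVP-style.

    A generic sketch of the speed-reading idea, not Cliff's program.
    """
    delay = 60.0 / words_per_minute  # seconds each word stays visible
    for word in rsvp_words(text):
        # carriage return rewinds to column 0 so each word overwrites the last
        sys.stdout.write("\r" + word.ljust(24))
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

rsvp("Reading one word at a time removes eye movement", words_per_minute=1200)
```

Because the eye never has to travel across the line, reading speed is limited only by how fast the words are flashed.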
Based on "seeding" (deriving a lexicon from a "powers that be" web site, like the Council on Foreign Relations), when one of our "target words" - or something lexically close - is found, our spiders/agents/wanderers take a small 2048-byte snip of surrounding text (from whatever web site) and email it to a central collection point, one of our servers on a "big pipe" (T1+).
The collected data at times exceeds 100 GB in sample size, and we could always use terabytes. The collected data is then filtered using at least seven layers of linguistic processing in Prolog, reduced to numbers, and rendered as a series of scatter-chart plots on multiple layers of IntelliCAD (http://www.cadinfo.net/icad/icadhis.htm). Viewed over a period of time, the scatter-chart points tend to coalesce into highly concentrated areas. Each dot on the scatter chart might represent one word or several hundred.
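A rough sketch of that snipping step might look like the following - my own reconstruction under assumptions (the seed lexicon, the case-insensitive matching, and the centering of the snip are all illustrative; the actual spider code is not public):

```python
SNIP_BYTES = 2048  # size of the surrounding-text snip described above

def extract_snips(page_text, lexicon):
    """Scan a page for seed-lexicon words and cut a snip of
    surrounding text, roughly centered on each hit."""
    lowered = page_text.lower()
    snips = []
    for word in lexicon:
        start = 0
        while True:
            hit = lowered.find(word, start)
            if hit == -1:
                break
            left = max(0, hit - SNIP_BYTES // 2)
            snips.append(page_text[left:left + SNIP_BYTES])
            start = hit + len(word)
    return snips
```

In the project as described, each snip would then be emailed to the central collection server rather than kept locally.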
The Chinese government apparently has a "dark" project (whose code we swept back in 2001/2002) called "ting," or in English "the cauldron," which does their version of our technology [great minds, huh?] with bigger linguistic samples (multiple pages) emailed to a supercomputer center not far from Lop Nor - their version of the Nevada test site, if you follow our drift. And yes, with spoofing, they could be anywhere and not even Chinese. If it matters. It's a complex world we live in...
To define meanings, words or groups of words have to be reduced to their essence. You know how lowest common denominators work in fractions, right? Well, the process is like looking for lowest common denominators among groups of words.
The core of the technology, therefore, is to look at how the scatter-chart points cluster - condensing into high "dot density" areas, which we call "entities," and then dissolving or diffusing over time as the entities change. Drill down into a dot and you get a series of phrases...
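As a toy illustration of that word-group "common denominator" idea (my own example; the real reduction runs through layered Prolog processing, not a simple prefix trick):

```python
def common_root(words):
    """Return the longest prefix shared by every word in the group -
    a crude stand-in for reducing word forms to a common essence."""
    if not words:
        return ""
    shortest = min(words, key=len)
    for length in range(len(shortest), 0, -1):
        prefix = shortest[:length]
        if all(w.startswith(prefix) for w in words):
            return prefix
    return ""

common_root(["flooding", "floods", "flooded"])  # -> "flood"
```

Many different surface forms collapse onto one underlying concept, which is what lets scattered mentions pile up as a single dot.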
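One simple way to detect such high "dot density" regions is grid binning - a sketch of my own, since the actual clustering method isn't published; the cell size and density threshold here are arbitrary:

```python
from collections import Counter

def dense_cells(points, cell_size=1.0, min_points=3):
    """Bin 2-D scatter points into grid cells and return the cells
    whose point count meets the density threshold - candidate
    "entities" in the sense used above."""
    bins = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in points
    )
    return {cell: count for cell, count in bins.items() if count >= min_points}

dense_cells([(0.1, 0.1), (0.2, 0.3), (0.4, 0.4), (5.0, 5.0)])  # -> {(0, 0): 3}
```

Running the same binning on successive data sets would show a cell's count rising as an "entity" condenses and falling as it diffuses.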
Our first published work in the area occurred in early July of 2001 and is available at http://www.urbansurvival.com/tip.htm.
What becomes obvious when reading about the technology is that it sometimes reads a bit like the I Ching (the Chinese Book of Changes), because the technology doesn't come out and say "go look for a terrorist attack over there." What it does is give phrases that would be associated with how people talk about an event - or, more accurately, how they change their speech to reflect their thought processes after an event.
The web bot technology apparently taps into an area of preconscious awareness. It's here that you run into the ramifications of Dean Radin's work at the Boundary Institute and the work of the Princeton Global Consciousness Project.
The Global Consciousness Project registered what appears to have been a disturbance in "the force" - the regular, orderly operation of life - associated with 9/11: http://www.boundaryinstitute.org/randomness.htm. Supposedly "random" numbers generated all over the world appeared to become less random immediately prior to 9/11.
The second point is contained in Dean Radin's paper at http://www.boundaryinstitute.org/articles/timereversed.pdf ("Time-reversed human experience: Experimental evidence and implications"). The mind-bending evidence in Radin's work is that in a laboratory, people begin to react to an event as early as six seconds before it takes place. In other words, if you are about to show someone a horribly grotesque picture, they will already be physically reacting to it before the picture actually becomes visible. Up to six seconds or so, and in a lab! In quantum terms, Radin's work demonstrates that people are physically able to perceive six seconds into the future.
Now let's flip back to September of 2001: there I was, having just completed a sales & marketing job in San Francisco the previous week, wondering, "Was the ABM missile test the 'world-changing event' - or was there something else?" And we all know that it was something else - the 9/11 terrorist attack.
The second web bot posting forecast an attack on house or assembly - but again, it was I Ching-like in wording.
It was in this timeframe that we responded to a Defense Department Broad Area Announcement - and tried to distill the concepts we've been discussing into a single PowerPoint slide. The project didn't receive funding, although we recently received a note from a venture capital group associated with government projects asking if our information could be kept on file.
Then we forecast an attack to occur the day of a commemorative event. That was the crash of American 587, and although that may eventually be blamed on excessive movement of the aircraft's vertical stabilizer, what people thought at the time was that it was another terrorist attack. See http://www.urbansurvival.com/webbot2.htm. There is a fair amount of noise in the outputs because we haven't had the resources to build filtering systems.
We forecast many attributes of the D.C. sniper case. This was significant because not only did we get the Army connection right, but there was also discussion of familial dysfunction. This was at a time when it was thought there was only one person involved - and the web bots got right that there was more than one. In fact, one reader sent in the right Army divisional insignia after reading the output. See http://www.urbansurvival.com/bot4.htm.
More recently, in January of 2003 the web bots were going on and on about a "maritime disaster" and "gem of the ocean" - which didn't make any sense to us, until the Space Shuttle Columbia disaster hit. Columbia wasn't a gem of the ocean, it was a space ship. We think of web bot outputs like holding up a sheet of paper with several hundred pinholes in it - and trying to guess what's on the other side by looking through pinholes, but just for an instant.
A run posted in July 2003 was a mission-specific run designed to assess where the next terrorist attack on the U.S. might come from. As you read the following, please appreciate that the words in black were published about 50 days before the northeast power outage, and that when the outage occurred, again everyone thought it was a terrorist attack. The blue text is the closeness of fit after the fact:
On the Saturday morning after the event, at the suggestion of several readers who had read the entire web bot work, here are some additional points of fit (again in blue), with the black text being the early July output.
Although the NE Power Outage was never blamed on terrorism, we keep hearing rumors that there was more to it than met the media. Many suspicions remain.
Like any new technology, we have our advocates and detractors. Our latest "hit" was the forecast of a "prominent death followed by a green death" - which was almost certainly the Anna Nicole Smith case for the former, and we're still debating whether the recent Chlorine gas attacks in Baghdad's Green Zone, or the environmental [green] collapse of bee colonies is the latter.
Do we have a theory of how it works? Sort of. It appears that changes in language precede large emotional events. The larger the emotional impact of an event, the more advance notice the bots seem to give.
If you picture some pin holes in a piece of paper - and you imagine being able to look through each pinhole for a fraction of a second - with the objective of seeing the future on the other side of the paper - that's where the web bots are today.
As for how all this relates to 2012? A schematic diagram I sketched up below shows one way humanity's perception of the future may operate - starting first as a religious experience, then moving into active "seers" then "science fiction" then immediate premonition followed almost immediately (on the historical level) by emergence of events.
Ever since Plato's Allegory of the Cave, people have sensed that odd things go on at the archetype level of consciousness. The web bots are a linguistic attempt to use the high data density of the Internet to sample language and seek linguistic shifts that we believe may precede events. The initial results suggest that language shifts on a macro level begin to occur 45 to 90 days before society-changing events. We believe we've demonstrated, most recently with the Northeast Power Outage forecast, that changes in language do indeed precede events - on a far broader scale than Dean Radin's early lab results suggest - and that these subtle language changes are publicly available through massive sampling and analysis of routine Internet traffic.