Behind the Reel: Neural Nets in 1988

I recently debuted my speaker's reel, four minutes of thumpa-thumpa "hire me!" that actually does a great job of explaining what I do while keeping the energy high. The reel contains many short moments that are significant to me, so I thought I'd elaborate on them in a quick series of posts.

At 0:58 in the reel you'll see the cover of a report I wrote in 1988, when I was the neural networks analyst for New Science Associates, a small, unfriendly spinout from the 600-lb. gorilla in the tech-analyst space, Gartner. (Gartner bought New Science, which was founded by Gartner alumni, around the time I left the firm in 1992.) You can download the full report here, where I've woven it into its context in my Brain.

There are two other moments in the reel that show the research my colleagues and I churned out. The section with the green banner in the second photo is the tables of contents from each biweekly issue, stacked together so you could find the article you needed.

We did offer our research in Lotus Notes, but the Web was just a twinkle in Tim Berners-Lee's eye at that point, not the public utility we rely on today. So most of our clients read our research on paper.

In the early days, when we had one, then two research services (which turned into seven by the time I left in 1992), we would have a "stuffing party" when each issue returned from the printer. We'd gather in our conference room, stick address labels on 9x12 envelopes, and stuff the new issue into them, all the while commenting on the clients whose names we recognized. It was a very physical way of getting familiar with the client base.

Because our retainer research services offered answers to client inquiries, some of the names would trigger comments like, "damn, I owe her two inquiries!"

On finding your calling

My time at New Science was special in many ways, but the first big one was the moment I realized three things: 1) I was a capable translator between the geeks and the business folk who needed what the geeks were building, 2) my instincts on what was up were pretty accurate, and 3) I could write it all down in ways that made sense.

All my prior jobs had been placeholders, ways to earn a living while figuring out what I was meant to do. This one clicked into place.

In retrospect, I was fortunate to land in a startup where I had a smart editor/boss in Ken Sonenclar, and a joyful, intelligent, collaborative band of colleagues who were as curious about the intersection of tech and business as I was (hi Karen, Mary, Mary, Neena and others!).

On watching neural nets grow up

I wrote that 1988 report during the second lifetime of neural nets. Their first lifetime started in the 1950s, in parallel with early research into expert systems. (See this video describing the history of neural nets and machine learning in general.) Then, in 1969, two respected AI researchers, Marvin Minsky and Seymour Papert, published Perceptrons, a book that "proved" that neural networks would never be useful. What they actually proved was that a single-layer perceptron can't learn anything that isn't linearly separable, with the humble XOR function as the classic counterexample. That much was true, but they didn't foresee multi-layered ("deep") neural networks, which clear that hurdle easily. Worse, their book chilled research into neural networks for over a decade.
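To make that limitation concrete, here's a minimal sketch of my own (plain NumPy, not anything from the 1988 report). A single-layer perceptron has no set of weights that solves XOR, but adding one hidden layer and training with backpropagation gets there quickly:

```python
# A minimal sketch (illustrative, not from the 1988 report): XOR is not
# linearly separable, so a single-layer perceptron -- out = sigmoid(X @ w + b)
# -- stalls near 0.5 on every input. One hidden layer solves it.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0. 1. 1. 0.]
```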

When I arrived as an observer in the late 1980s, researchers were newly motivated by books like Parallel Distributed Processing, which was published in 1986, and a bit of a boom ensued. In the frame of the bazillions of dollars chasing today's GenAI companies, that boom was but a blip. But our clients at New Science were interested in what the technology might do, and weak though those early models were, they were good at things that expert systems couldn't solve. For example, a neural net could distinguish between maple and oak leaves, somehow distilling the essence of "mapleness" and "oakness" from its training set in ways that knowledge engineers interviewing domain experts couldn't match.
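To illustrate the contrast, here's a sketch of my own, with invented feature names and data (nothing from our research): instead of a knowledge engineer writing rules about lobes and serrated edges, you hand a network labeled examples and let it find the distinction itself.

```python
# Hypothetical sketch: a network learns maple-vs-oak from labeled examples
# rather than hand-written expert rules. The features (lobe depth, edge
# serration, aspect ratio) and data are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

def make_leaves(n, lobe_depth, serration, aspect):
    """Draw n noisy feature vectors around a species' typical values."""
    center = np.array([lobe_depth, serration, aspect])
    return center + rng.normal(scale=0.08, size=(n, 3))

maple = make_leaves(100, lobe_depth=0.8, serration=0.3, aspect=1.0)
oak = make_leaves(100, lobe_depth=0.5, serration=0.7, aspect=1.6)
X = np.vstack([maple, oak])
y = np.array(["maple"] * 100 + ["oak"] * 100)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# A new, unseen leaf: deeply lobed, lightly serrated, roughly round.
print(clf.predict([[0.75, 0.35, 1.05]]))  # -> ['maple']
```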

Technological progress often moves in steps, plateauing when it hits constraints. Perceptrons marked the top of neural networks' first step; tech limitations constrained neural nets for a long time after that. But many researchers saw the potential and kept at it.

Scroll forward to December 2022, when the combination of processor power, training data availability and algorithmic sophistication hit a sweet spot that exploded on the world stage.

I've seen my share of hype in the tech business, and I am worried about the amount of money that is chasing Generative AI these days, but I have to say that this boom has substance: GenAI is the real deal.

I don't mean that we've achieved Artificial General Intelligence and can stop worrying about warfare and hunger, but rather that a very useful form of intelligence has arrived, which will transform many industries and solve problems we are only starting to identify. And it's getting awfully close to being able to improve itself.

#neuralnetworks #techhistory #newscienceassociates #rethinkconstraints


This article is cross-posted on Substack here and LinkedIn here.

