The American Dream: 17 Years of Engineering Software

Seventeen years ago, on April 10th 1991, a plane landed at John F. Kennedy Airport. That plane
had just crossed the Atlantic carrying, amongst others, passengers escaping the crumbling Soviet empire.
One of them was me. I walked off that plane with my first ever taste of Coca-Cola in my mouth, a lame teenage mustache,
and not a clue about what to expect.

When my sister emailed me on April 10th 2008 and reminded me of our immigration anniversary, I was suddenly overwhelmed with memories.
A lot has happened since then. 17 years is such a long time that it is difficult to fathom.
I am left with bits and pieces of memories and the person that I am today. Each memory by
itself is rarely strong and profound. A single memory is just a dot in your timeline. But when you
pile the memories on top of each other, you get a bigger and better picture. Here is to everyone who made my American
Dream come true and to all of you who helped me grow as a software engineer.

Lehigh University: The Basics

I went to engineering school: Lehigh University in Bethlehem, PA. My credits from Ukraine
got me into the sophomore year and I immediately declared a math major. I was always a good student,
but I never loved math (can you blame me?). It was always too abstract and too detached from reality.
I knew how to manipulate formulas, but I had no idea why I was doing it.

My mom was concerned about my future. She kept telling me that there was no money in math
and that I should learn computers – the thing of the future. Back then I was scared of
computers. My only prior encounter with them was back in Ukraine where a computer was a humongous piece of metal.
The only program that I’d written in Basic to multiply matrices had an infinite loop in it and
an angry professor had to reboot the whole machine to stop it. I got a C in that class.

So when I realized that the path to my happiness was in computers, I was kind of scared.
To top it off, the first programming class that I took was Introduction to Computer Engineering,
focused on coding in assembly language. The final project was to write an editor in 8086 assembly;
and for over three weeks I was trying all possible combinations of letters and digits that
could make the program run. I got it, but it was really like monkeys typing Shakespeare.

Strangely enough, that did not stop me and I then took Data Structures in Pascal. My professor, Dr. Adair Dingle,
was probably the reason I stuck with programming. For the first time I was fascinated with computer science, got 100
on a test and coded something that actually ran. She was great and made me believe that I could do it. So I declared a minor in Computer Science.

In my senior year I took a Systems Programming class from a guy named Stephen Corbissero.
He was not a professor, but he was the best teacher in the CS department because he actually knew how things worked.
He could code in C and he knew Unix inside out. I was scared of him and of all electrical engineers that loved him.
But I really wanted to learn C, so I took the class. As a final project we had to write a Unix Shell.
It was hard for me, really really hard. I spent weeks in the lab working on this class. In the end I got an
A- and learned that I could plunge through hard problems if I kept at them. And thanks to this class, I also
got the skills needed to get my first job.

Goldman Sachs: Motif, C and Passion

In April of 1994 I had 3 offers. The first one was from Goldman Sachs in New York to work on Wall Street.
My second offer was from IBM in Virginia to work on an aeronautics project and the last offer was
to join the programming staff of the pharmaceutical giant Merck. I did not want to get clearance nor
was I excited enough about Merck, so I packed my bag and went to where the action was – New York City.

Goldman has always been an amazing company and back in 1994 it was still privately held. It was famous
for hiring smart, capable college kids and then making them work really hard, while paying good salaries and fantastic end-of-year bonuses.
During one of the interviews, I was asked to explain how hash tables worked. To this day this is my
favorite introductory technical question.
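
For anyone curious, the gist of the answer is simple: hash the key to pick a bucket, and chain colliding entries inside the bucket. Here is a minimal sketch in Java of the idea (illustrative only, not how java.util.HashMap is actually implemented):

```java
import java.util.LinkedList;

// A toy hash table with separate chaining: hash the key, pick a bucket,
// and walk a short list inside the bucket to resolve collisions.
public class TinyHashTable<K, V> {
    private static final int BUCKETS = 16;

    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    private final LinkedList<Entry<K, V>>[] buckets = new LinkedList[BUCKETS];

    private int indexFor(K key) {
        // Strip the sign bit so the index is always within range.
        return (key.hashCode() & 0x7fffffff) % BUCKETS;
    }

    public void put(K key, V value) {
        int i = indexFor(key);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        for (Entry<K, V> e : buckets[i]) {
            if (e.key.equals(key)) { e.value = value; return; } // key already present: overwrite
        }
        buckets[i].add(new Entry<>(key, value));
    }

    public V get(K key) {
        int i = indexFor(key);
        if (buckets[i] == null) return null;
        for (Entry<K, V> e : buckets[i]) {
            if (e.key.equals(key)) return e.value;
        }
        return null; // average lookup stays constant time as long as buckets stay short
    }
}
```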

But Goldman had no illusions about our skills. College graduates were expected to have
only theoretical knowledge, and so for two months during the summer we were put through a training program called NAPA (New Associate
Programmer Analyst). The main objective was to make sure that we came out knowing how to program in C.

Since most of us had no idea
what the difference was between char* and char** (the latter one was just scary), there was a lot of work to be done.
Not only did we have to learn C well, we also needed to learn the X Window environment – a de facto standard on Wall Street
in the early nineties. X Window came out of MIT and was a set of amazing client-server libraries for building
graphical applications. We learned the raw X Window library, the Xt layer above it and the widget layer called Motif.
I really did not understand how everything worked, but I got a sense of how powerful abstractions, libraries and layers can be.

My first project was to work on an account reconciliation system. Lots of systems in any financial institution are focused
on reconciliation. Since any discrepancy can cost the company millions of dollars, the correctness of all books
is of paramount importance. Back then the system ran a nightly batch that transferred data into a relational database.
The interface was written in Motif and allowed managers to flip through thousands of bits of information. It had a striking
resemblance to Excel – a table with columns that could be sorted and searched. But it needed to be custom, because Wall Street
was all about custom IT.

I spent 2 years working on financial systems at Goldman, mastering C and the X libraries, picking up Tcl/Tk, learning
SQL and Sybase and a development environment called TeleUSE
with its C-based scripting language D. In the process, I learned regular expressions, Awk and a bit of Perl,
although I never developed a taste for any of them. But I got infected and became very curious about programming. I wanted to do it well, very well. And so once again, like back in college, I
spent my time plunging through problems. I would work 16+ hour days, going home only to shower and get some quick sleep. I would
swallow programming books one after another and spend endless hours talking to people about code.

Back in Goldman I made a few good friends who stayed with me in my journey through the world of
programming. One of them in particular made a big impact on me. No matter what, he would always figure stuff out.
He was sharp, but more than that, he applied common sense. This was the tool that I lacked and he completely mastered.
Looking back now, I realize that he was the first master of patterns that I ever met. And even though I could not
consciously express it, the bug of patterns had been planted inside of me. From then on I would be on an intense search for
patterns in programming, science and life.

D.E.Shaw & Co: C++ and Sharks

After spending 2 years at Goldman I was feeling bored. Not that I had mastered programming – far from it.
I just felt that there was something better out there. Another friend of mine left the Fixed Income group to join a company called Juno, a spin-off from
a high-tech investment fund called D.E.Shaw & Co. David E. Shaw is a famous computer scientist from Columbia University.
In 1988 he started a high-tech hedge fund, focused on quantitative trading. Known for his exceptional intellect, he
was able to attract PhD graduates from the top schools in the country, top-notch Unix hackers and incredibly bright humanities
majors. Mr. Shaw created a culture of secrecy and elitism – the firm was purposefully mysterious about its processes, strategies and plans.

To even land an interview at that place was hard, but to get a job was nearly impossible, because the interviewers asked
very difficult programming problems and math puzzles. In all honesty, I could not have passed the interview had certain questions been asked.
But serendipity and luck were on my side. They asked me questions that I could answer and they nodded when I passionately told them
about my work at Goldman. To my big surprise I got an offer and became employee #223. I had no idea what I had gotten myself into.

My new boss was one of the most incredible people I’ve ever met. He rarely slept and emitted ideas
with the speed of light. On my first day on the job he pulled me into the office and said that tomorrow morning
was a deadline for an important demo – sending orders to the exchange using the Java client. Java? This
was a hot new language that had just come out of Sun. I did not really know C++, let alone Java. In any case,
my first day on the job was the first of many “all-nighters”. The demo worked, but only thanks to another
engineer who actually got it to work. After the first day (and night) I had a hunch that this place was going to be fun.

D.E.Shaw was one of the most competitive environments ever created. Assembling an impressive
number of very smart people in one place has its pluses and minuses. Everyone competed
really hard. Fortunately for me, I was on the bottom of the food chain and one of the least knowledgeable
employees. It was a perfect learning environment – I was absorbing information like a sponge.

At D.E.Shaw I learned the intricacies of C++. It was not easy, but I had good teachers. A senior engineer,
who later became my boss and mentor, knew C++ really well because he’d previously built massive power grid simulations
for Con Edison. He was also the first person who explained the power of object-oriented programming to me.
I remember reading Effective C++ books and holding in my hands one of the first copies of the famous Design Patterns
books. I became a decent C++ programmer and I started to really understand some engineering principles, but I was still confused. C++ is a
complicated language, where you really need to know how to program. It was hard to separate the what from the how and hard to see the bigger picture.

We were building an automated trading system that maintained a list of outstanding positions and made
decisions based on history and market conditions. The program would process quotes from the exchange
and write them to an object database called ObjectStore at the rate of 2,000 per second. It would apply
sophisticated rule-based decision making to decide whether to keep a position or trade something automatically.
The entire system was quite complex and took years to develop.
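
The details of that system stayed behind D.E.Shaw’s doors, but the general shape of such a loop is easy to sketch. The names below (Quote, Position, Rule) are hypothetical and invented only for illustration:

```java
// Hypothetical names, invented only for illustration -- not the actual D.E.Shaw code.
class Quote {
    final String symbol; final double bid; final double ask;
    Quote(String symbol, double bid, double ask) { this.symbol = symbol; this.bid = bid; this.ask = ask; }
}

class Position {
    final String symbol; final int shares; final double entryPrice;
    Position(String symbol, int shares, double entryPrice) {
        this.symbol = symbol; this.shares = shares; this.entryPrice = entryPrice;
    }
}

interface Rule {
    // Encapsulates the rule-based decision: keep the position or trade out of it.
    boolean shouldTrade(Position position, Quote latestQuote);
}

class TradingLoop {
    private final Rule rule;
    TradingLoop(Rule rule) { this.rule = rule; }

    void onQuote(Quote q, Iterable<Position> openPositions) {
        persist(q);                       // the real system wrote ~2,000 quotes per second
        for (Position p : openPositions) {
            if (p.symbol.equals(q.symbol) && rule.shouldTrade(p, q)) {
                sendOrder(p);             // trade automatically, no human in the loop
            }
        }
    }

    private void persist(Quote q) { /* write the quote to the object database */ }
    private void sendOrder(Position p) { /* send an order to the exchange (omitted) */ }
}
```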

I felt like I lacked the Computer Science fundamentals to understand all the details, and so I enrolled in a Masters program at the Courant Institute at NYU.
Unfortunately I was disappointed. Most of the classes were rather basic and outdated. Most professors were interested in research and found the
basics trivial. To compensate for the lack of excitement at school, I started learning more and more on my own.

At the same time D.E.Shaw was not doing so well. The company quickly grew to over 1,000 people and lost
some of its talent to a rising internet startup – Amazon.com. As it turns out, the Third Market group where I worked
was previously headed by none other than Jeff Bezos. He left a few months before I joined and was actively recruiting
top talent to work for him in Seattle. Many D.E.Shaw alumni became instrumental to Amazon’s success. A lot of people
that I liked to work with left and I felt that it was time for me to move on as well.

Thinkmap: Java, Networks and Simplicity

I joined an information visualization startup called Plumb Design that
was developing an innovative technology called Thinkmap. The idea behind Thinkmap was to
create an abstraction for visualizing and navigating any data set as a network. The system was architected
around several basic layers – data adapters took care of taking information and mapping it into
the nodes and edges of the network. The next layer was responsible for arranging the network in
space. The layer above, the most fascinating one, created motion by utilizing formulas from physics.
The final layer was the visual one: a projection from the animation space onto the flat screen.
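
Even today, that is a nice way to decompose a visualization. The interfaces below are a rough sketch of the layering described above; the names are mine, not Thinkmap’s actual API:

```java
// Hypothetical interface names -- a sketch of the layering idea, not the actual Thinkmap API.
import java.util.List;

interface Graph {                // nodes and edges produced by a data adapter
    List<String> nodes();
    List<String[]> edges();      // each edge is a {from, to} pair
}

interface DataAdapter<S> {       // maps a raw data source into a network
    Graph toGraph(S source);
}

interface Layout {               // assigns each node a position in space
    double[] positionOf(String node);
}

interface Physics {              // moves nodes over time using spring/repulsion-style forces
    Layout step(Graph graph, Layout current, double dtSeconds);
}

interface Projection {           // flattens the animated space onto the 2D screen
    int[] toScreen(double[] position);
}
```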

Thinkmap was capable of visualizing a wide array of information: from a thesaurus to the Internet Movie Database,
from musical albums to books and even Java code. It was during my time at Plumb that I realized that
everything in the world was about networks. I became fascinated with this field and soon discovered
a branch of science called Complexity.

The study of complex systems is the study of unifying themes
that exist between different scientific disciplines. Scientists discovered that things as diverse as grains of sand, economies,
ecologies, physical particles and galaxies obey common laws. All of these systems can be represented as networks of information exchange,
where the next level of complexity arises naturally from the interplay of the nodes on the lower level.
What fascinated me was that by representing a complex system as a network, you can create a model that
helps you understand how the system behaves. I sensed that Complexity science was the most profound
thing I had ever encountered and that the universal patterns that I was seeking were explained by it.

While I spent all my free time reading about complexity, at work I was mastering Java. My new boss was
tough and demanding. He was the biggest perfectionist I ever met. Very creative and very smart, he could
write any piece of code faster, better and, most importantly, simpler. During my time at Plumb, at each
encounter he would remind me that I needed to make things simpler. It was frustrating, because I was never
good enough, but it was also very educational. Without a doubt, that experience made me a stronger person,
preparing me for the future.

It was at Plumb Design that I really started to master programming. Some of the code that I’d written
there had the elegance and beauty that is so intrinsic to all good code. To model a system correctly, you needed to think of it in terms of interfaces. Each building block itself would be simple, but when you arranged them
together so that they fit, a new set of behaviors would arise. One day Complexity Science and Java programming converged.
I realized that code is just like complex systems: a bigger whole arises through the interplay of
its parts.

NYU: 5 years of Software Engineering for Undergrads

At the same time I was finishing up my masters at NYU. Once I was hanging around the department
and jokingly said that back in Ukraine I wanted to be a teacher. The department chair jumped on that and asked if I wanted to teach an undergraduate programming class. I figured that it could not be all that bad
and signed up to teach Introduction to Programming. To be honest, it was bad and it was hard. The kids had no idea
what programming was and did not really want to learn it either. Despite the fact that the class was not a big
success, the department asked me to do another one. I said that instead of Pascal I wanted to teach an advanced class in Java.

And so was born the Software Engineering in Java class, one of the biggest adventures of my career.
I taught this class 5 times and each time it was so much fun. It was really intense –
all of the best, most cutting-edge stuff I knew, I shared with NYU CS seniors. We covered Java topics like
exceptions, reflection, threads, sockets and RMI. We learned how to persist JavaBeans in XML and how to work with relational databases from Java.
We covered basic design patterns, unit testing, refactoring and other principles of agile software engineering.
But the best part of the class was that we had a semester long project that modeled a classic complex system – a pond environment where digital creatures fought for their survival.
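
The project had a simple shape, something along these lines, though the names here are made up rather than taken from the actual assignment: each creature decides its next action from a local view of the pond, and the pond advances everyone one tick at a time.

```java
// A simplified sketch of the semester project's shape (hypothetical names, not the assignment code).
import java.util.ArrayList;
import java.util.List;

interface Creature {
    // Decide the next action based only on what the creature can see nearby.
    Action act(LocalView view);
}

enum Action { MOVE_NORTH, MOVE_SOUTH, MOVE_EAST, MOVE_WEST, EAT, REPRODUCE, REST }

class LocalView {
    final int food;        // food visible in the neighboring cells
    final int neighbors;   // other creatures visible nearby
    LocalView(int food, int neighbors) { this.food = food; this.neighbors = neighbors; }
}

class Pond {
    private final List<Creature> creatures = new ArrayList<>();

    void add(Creature c) { creatures.add(c); }

    // One simulation step: ask every creature for its action, then apply the actions.
    void tick() {
        for (Creature c : creatures) {
            // Placeholder local view; a real pond would compute it from the grid around the creature.
            Action a = c.act(new LocalView(1, creatures.size() - 1));
            apply(c, a);
        }
    }

    private void apply(Creature c, Action a) {
        // Update positions, energy, births and deaths here (omitted in this sketch).
    }
}
```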

The class won the award for outstanding teaching, but the biggest reward was the comments that
I got from students. They felt that unlike any other class that they took, this one was really preparing
them for their careers. Many years after my graduation I returned the favor. Like Stephen Corbissero at Lehigh University,
at NYU I created a course that was based on pragmatic things that engineers do in the field, not some theoretical
ideas that never see the light outside of academia.

To this day, I get emails from my students thanking me for the class. It makes me both
proud and happy. But as much as they are grateful to me, I am thankful to them much more.
Because as you know, the best way to learn is to teach. It is teaching this class that
really made me into the software engineer that I am today.

Information Laboratory: Small Worlds and Large-Scale Software

In the summer of 2000 I became convinced that Complexity Science had many business applications.
On a whim I decided to start a company called Information Laboratory that would turn the insights of complexity science into
a piece of software. I envisioned a powerful library, a modeling toolkit, that would help people
understand the behavior of diverse complex systems – from software and business organizations to
power grids and traffic flows. At the heart of this library would be networks or mathematical graphs.
For each situation there would be components to adapt the information into the data layer. Once the
system was represented as a network, it would be analyzed using a set of graph algorithms.

Inspired by the insights in a recent paper by Cornell PhD student Duncan Watts, we realized
that a lot can be said about the behavior of a system just by looking at its structure. As it turns out, there are not that many ways for nodes to be wired together.
Some nodes in a network look perfectly balanced – inputs are equal to outputs. But some are not and those are very interesting.
For example, there are nodes that have a lot of inputs and just a few outputs or the other way around. The question that
we wanted to answer was: What do these nodes mean in different systems? For example, in a power grid, a node with a lot of incoming connections
and just a few outputs implies a potential outage point. Looking at the communication pathways in a company, a hub – the person
who receives and disseminates a lot of information – is a valuable employee. And in software, a component that
does not depend on any other but has a lot of dependents needs to be handled with care.

It was software analysis that soon became our primary focus. We realized that analyzing software structure
is a powerful way of identifying, preventing and solving architectural problems. For example, in software a component that
has a lot of dependencies is vulnerable to changes. We called such components ‘breakable’ and considered them bad.
Another bad structure would be a hub, since it would have a lot of dependencies and dependents. But worst of all
would be something that we dubbed a ‘tangle’ – a set of components interdependent via multiple loops. The result of our
insights was a software architecture tool called Small Worlds.
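
In graph terms the checks are simple. Here is a back-of-the-envelope sketch in Java, with illustrative names and thresholds; the real Small Worlds analysis was considerably more involved:

```java
// Illustrative sketch of the structural checks described above -- not the Small Worlds code.
import java.util.*;

class DependencyGraph {
    // component -> components it depends on
    private final Map<String, Set<String>> dependsOn = new HashMap<>();
    // component -> components that depend on it
    private final Map<String, Set<String>> dependedOnBy = new HashMap<>();

    void addDependency(String from, String to) {
        dependsOn.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        dependedOnBy.computeIfAbsent(to, k -> new HashSet<>()).add(from);
    }

    // 'Breakable': depends on many other components, so it is vulnerable to change.
    boolean isBreakable(String c, int threshold) {
        return dependsOn.getOrDefault(c, Set.of()).size() >= threshold;
    }

    // 'Hub': many dependencies *and* many dependents.
    boolean isHub(String c, int threshold) {
        return isBreakable(c, threshold)
            && dependedOnBy.getOrDefault(c, Set.of()).size() >= threshold;
    }

    // 'Tangle': the component sits on a dependency cycle. This is a simple reachability
    // check; a real tool would compute strongly connected components instead.
    boolean isInTangle(String c) {
        Deque<String> stack = new ArrayDeque<>(dependsOn.getOrDefault(c, Set.of()));
        Set<String> seen = new HashSet<>();
        while (!stack.isEmpty()) {
            String next = stack.pop();
            if (next.equals(c)) return true;          // we got back to where we started
            if (seen.add(next)) stack.addAll(dependsOn.getOrDefault(next, Set.of()));
        }
        return false;
    }
}
```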

The tool was written entirely in Java and featured sophisticated graph visualizations and algorithms. It worked by
reading the Java class files of other software and constructing a gigantic network where all components and dependencies were
captured. The tool performed automatic structural analysis and identified problematic components – breakables, hubs and tangles.
The result of the analysis was a report and the architectural score of the entire system. In addition the tool offered
insights into the causes of the issues that it identified and aimed to help architects keep their large-scale systems clean.

IBM: Eclipse, Code Review and Rational Software Architect

In July 2003 IBM acquired Information Laboratory, aiming to roll Small Worlds into their product line.
Just a few months before that IBM had acquired Rational Software – the maker of popular software development and
modeling tools. Post-acquisition, I joined the software quality group as the Architect of Code Analysis tools.
Needless to say a switch from a tiny startup to the biggest software maker in the world was not easy. At first,
most of my time was consumed figuring out how things worked and how to make anything happen. The original plan was to keep Small Worlds as a standalone product, but soon it was clear that it wasn’t to be.
IBM was planning the roll out of the next generation of its programming tools: Rational Developer
and Rational Architect, both based on their open source offering called Eclipse.
So IBM renamed Small Worlds to Structural Analysis for Java (SA4J) and made it freely available via its alphaWorks program.

The next challenge was to rebuild the tool so that it fit into IBM’s product line and marketing plans. As a result,
it was split into two pieces – one ended up being part of the Rational Architect offering as Structural Patterns.
The second piece, called Code Review, is something that we built entirely from scratch. While Small Worlds was focused on architectural
problems, Code Review found a range of issues from security violations to redundant code to logical flaws. It also offered automatic refactoring
that with the touch of a button helped developers fix their code.

Learning the Eclipse API was not a lot of fun, and getting the product done with a small team in a matter of 9 months
was really a ‘mission impossible’. We had to balance internal politics with the pressure of the schedule and the inability of management
to make up their mind. Remarkably, our team succeeded, largely because we focused on code more than politics.
Code Review was ready on time and was shipped in the first version of Rational Developer.

But the entire experience was disappointing. I realized that at the end of the day
it was not about building quality tools or doing the right thing. Political and slow, the software
quality group was also known for its inability to build quality software on time. I felt that this was
too hypocritical for me to stick around.

Data Synapse: Virtualization

I left IBM to become chief architect of Data Synapse, a grid computing company based in New York.
Briefly in 2000, during my first month as the founder of Information Laboratory, I helped Data Synapse
with their original grid server infrastructure. Five years later when I got a call from a founder to join
full-time, I was intrigued. Data Synapse aimed to build its second product, an on demand virtualization infrastructure
for J2EE. The idea was to enable dynamic provisioning of application servers to meet the changing demands of
an enterprise throughout the day. In a way, this was a more sophisticated precursor of what EC2 is today.
And I just could not resist this project.

Without a doubt, this was the most challenging piece of software I ever dealt with. Its core was
a sophisticated scheduling algorithm that orchestrated a grid with thousands of servers. Each server was
provisioned with bundles containing stripped-down versions of Apache, Tomcat, WebLogic, JBoss, WebSphere and many
other grid containers. Each application would be deployed to the central broker and then distributed to each node
on the grid. As an input, the broker would get a schedule indicating when each application needed to run. Each
grid application included a set of agents that monitored its characteristics – such as throughput,
memory load, disk usage, etc. Based on the current state of the grid and target performance rules, the broker
would decide how to allocate the limited resources.
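
A drastically simplified sketch of that decision loop might look like this, with hypothetical names and thresholds; the real scheduler weighed far more signals and constraints:

```java
// Heavily simplified sketch of the broker's decision loop -- not the actual FabricServer algorithm.
import java.util.List;

class AppMetrics {
    final String app;
    final double throughput;       // requests per second reported by the agents
    final double targetThroughput; // target taken from the performance rules
    final int instances;           // application servers currently running this app
    AppMetrics(String app, double throughput, double targetThroughput, int instances) {
        this.app = app;
        this.throughput = throughput;
        this.targetThroughput = targetThroughput;
        this.instances = instances;
    }
}

class Broker {
    private int freeServers;

    Broker(int freeServers) { this.freeServers = freeServers; }

    // One pass over the grid: give another server to applications falling behind,
    // reclaim a server from applications comfortably ahead of their targets.
    void rebalance(List<AppMetrics> grid) {
        for (AppMetrics m : grid) {
            if (m.throughput < 0.9 * m.targetThroughput && freeServers > 0) {
                freeServers--;
                provision(m.app);
            } else if (m.throughput > 1.2 * m.targetThroughput && m.instances > 1) {
                freeServers++;
                decommission(m.app);
            }
        }
    }

    private void provision(String app)    { /* push the app's bundle to a free node and start it */ }
    private void decommission(String app) { /* stop one instance and return the node to the pool */ }
}
```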

The result of many months of work was the first version of DataSynapse’s FabricServer.
As soon as the product was released it was piloted at major banks – Wachovia and Bank of America. Financial institutions were
always on the cutting edge of grid computing, because of their need for massively parallel risk computations. And when the J2EE virtualization
became available, the banks were first in line to give it a try. Running FabricServer in a real environment proved
to be yet another challenge. In the early days we would constantly uncover stuff in the field that we would not have thought
of back in the office. But as time went by, the product worked as expected in more and more situations. This system of enormous complexity really did work.

AdaptiveBlue: JavaScript, Mozilla and Amazon Web Services

In February 2006 I founded my second company – AdaptiveBlue. While Information Laboratory was all about
structure, AdaptiveBlue is focused on what can be done with semantics. Fascinated with the ideas of the Semantic Web and smart browsing,
I dived into the world of new web technologies. Switching to JavaScript and the
Mozilla platform was not easy, but through the years I have learned to adapt and embrace new technologies.

Today our software is a mix of JavaScript, Mozilla XPCOM and XUL on the front end. The back end has some PHP scripting,
but mostly it is written in Java. Both back end and front end share the same XML infrastructure allowing us to make easy changes
and extensions to the system. To scale to hundreds of thousands of users, we chose to architect our software around Amazon Web Services –
the most reliable web-scale infrastructure available today. We also heavily use available libraries and try not to re-invent the wheel.
In short, we focus on the application itself and on the user experience. The technology is just the means to enable our business.

ReadWriteWeb: The Reflections

If you’ve reached this sentence, you must have realized that I consider myself exceptionally fortunate.
I’ve had so many different experiences, learned from so many bright people, built amazing software,
discovered the power of complex systems and had a lot of great students. In the last 17 years I’ve truly lived my American Dream.
Of course my character, determination and passion are also responsible for my life. Yet, without the opportunities
that I’ve had, none of what I’ve done would’ve been possible. America, in my mind, is all about the opportunities.

The latest opportunity that I was given was to be a contributor to this wonderful blog, ReadWriteWeb. Being able to
cover technical trends, to share my views and most importantly to learn from all of our readers is a true privilege.
I am grateful to Richard, the writers and to all of you for this unique experience. I hope that my journey
so far has been both interesting and inspirational for you. Here is to the American Dream and endless possibilities.
