This entry is boring. At least Allie will think so. Others also.

The other night I studied in Stern Dining. It brought back some memories. But more than anything it made me feel really old. Like, it just stuns me how much older I am than the other people there, you know? The people living in the dorms.

It’s also crazy to me how much Stanford has changed just since I was there. All these buildings didn’t exist. CIV is gone. Is the dagger still around? Bio 30 series – gone. Physics 50 series – gone. Remember when Stern used to be points? Or when the CS department was in Tresidder? Anyway, I saw this flyer announcing possible changes in the meal plan. I’m really bitter because they make total sense and I don’t know why they had to wait until I was gone to do it. These include having meals per quarter rather than per week, letting you use extra meals for guests, and having a points equivalent for meals that you can use at Late Night and such. Seriously, that just makes so much more sense.

So don’t ask me why, but I’ve been thinking a lot lately. Just about life. I made this comment at Horizons that I really believe. But like, I really think that when people graduate from college, a lot of them, to stave off loneliness, just seek to do things all the time, be with people all the time, or divert themselves all the time. What happens is that their primary activity, or focus, is just entertainment, diversion. There’s no sense of purpose or calling. I’m overstating things, but I don’t know, that’s just what I think. That most people are primarily seeking to entertain themselves.

It just depresses me, because when you live like that, as Henry said, you go about your life and then one day find that you’re 40. I don’t want to do that. I want to live deliberately, and with purpose. Not just saying I have a purpose, but actually living with purpose, a qualitative difference.

The thing is, I don’t know how much I’m doing that right now. I don’t feel like I’m being heavily involved with something purposeful. I’m just kind of riding the wave, just going along. I was gonna write more about this, but whatever.

So as you know, Microsoft was found to have violated antitrust law. I don’t know how you feel about it, but I’m very happy. If you hear about the things Microsoft has done to screw companies, it’s just jacked up.

The thing is, nowadays I know people who work for Microsoft, and they are invariably good people. So I separate the employees from the upper management. But the people who run it have just done some nasty things.

Anyway, I read this article that was really interesting. The main point, of course, is whether Microsoft stifled competition and innovation. This columnist argued that they clearly have. To realize this, you simply have to think about the software you use – have there really been any innovations in software since Microsoft gained extreme dominance in desktop software? I’m talking since Windows and Office became pretty much standard issue a few years back.

I don’t know, this was just thought provoking to me. Really, since Microsoft took over operating systems, and word processing, and presentation software, there hasn’t really been much innovation. In stark contrast (in my opinion) to the ways of yore. In fact, the only big areas in which there’s been incredible software innovation in the past few years have all been related to the Web, where Microsoft doesn’t have a stranglehold.

Anyway, computer job fairs are interesting nowadays, because traditional software companies are becoming fewer and farther between. You know, there are places like Adobe, and others that have their niche. But by far, everyone wants Web developers. That’s where the innovation is taking place, and I really think it’s because Microsoft doesn’t dominate there (yet).

So I don’t care what Microsoft says; in my mind, it’s clear that they stifle innovation wherever they go. And that is a bad thing. I really think, for the benefit of consumers, Microsoft needs to be stopped at all costs. Seriously, if you just engage in careful thought, it’s obvious that they prevent software innovation. It’s almost a direct relationship – where Microsoft has the least influence, the most innovation occurs. At least in the past few years.

Argh, I’m getting all worked up about it. But I’m going to make a bold claim. That claim is, Microsoft hasn’t done a single innovative thing in the last 5 years. What they do is take other people’s innovations, improve on them a little bit, use their vast influence and resources to drive the innovators out of the market, and then never innovate again.

Whenever they have competitors, Microsoft’s innovation is amazing. Like the development of Internet Explorer. Or word processing when they went against WordPerfect. Or presentation tools versus Harvard Graphics. It’s impressive. But I really think once they drive their competitors out, innovation stagnates.

I’m foaming at the mouth. I should move on.

It amazes me how much I know. Don’t take this the wrong way. I’m not saying I know more than anyone else. All I’m saying is that I know a lot about particular subjects, like Artificial Intelligence (AI), and that amazes me. Dave has written stuff to the same effect before, how it’s amazing how everyone, by the time they leave Stanford, has just an incredible amount of knowledge about particular fields, unless they were econ majors.

I was struck by this as I was listening to a talk by this guy who does AI work. He touched upon lots of different parts of AI, and even gave some criticisms of differing views. I’ve reached a point in my AI education where I know, at least superficially, everything he was talking about, and when he criticized views, I knew what those views were and who espoused them. Seriously, it just blows me away how much I know about Artificial Intelligence.

And honestly, it seems kind of like a waste to not use all that knowledge. Like, I learned it all and I’m going to end up just being some scrub programmer? I don’t know. At the same time, it’s clear I need a break from school. My motivation level has reached a critical low, and that signals something to me.

Anyway, I was musing about some things about Artificial Intelligence; it might be boring to you but I think it’s interesting.

So two of the classes I’ve recently taken (or am taking) are Computer Vision and Natural Language Processing. In a way, these two fields deal with getting computers to see, and getting computers to understand language, be it spoken words or written text. So it’s getting computers to see and hear intelligently.

The crazy thing is how difficult both problems are. Once you learn about them, they just seem hopeless. Computer vision is essentially a bunch of math and tons and tons of calculations. Eric and I did this project dealing with small image sequences (like maybe 13 images), nothing big, and it took quite a bit of processor time. It’s just a lot of calculation. Natural Language Processing (NLP) is similar, in a way – it just involves a lot of statistical data crunching and is processor intensive. And even with all this, the results often leave much to be desired.
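To give a flavor of why vision work eats so much processor time, here’s a toy sketch (my own illustration, not from our actual project): running even one tiny 3x3 edge-detection filter over an image means nine multiply-adds per pixel, and a real pipeline runs many filters over many frames.

```python
# Toy illustration: naive 2D convolution. Even this tiny example does
# 9 multiply-adds per output pixel; scale that up to real images and
# image sequences and the calculation count explodes.
def convolve(image, kernel):
    """Slide the kernel over the image (no padding) and sum products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter (Sobel) on a tiny 4x4 "image" with an edge
# between the dark left half and the bright right half:
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
edges = convolve(image, sobel_x)  # large values where the edge is
```

Multiply that inner loop by hundreds of thousands of pixels and a dozen frames and you see where the processor time goes.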

To me, this is a reminder of how wonderful and amazing the human body is. There’s no question that in certain respects, the processing power of modern computers far exceeds that of humans. The amazing thing is how difficult it is to get computers to do certain things that humans do effortlessly – namely seeing and hearing, and being able to recognize what we’re seeing and hearing. For computers to do that is an incredibly complex (and some say hopeless) problem. Humans, with our extremely limited computing capability, can do these incredible things. It’s a testament to how well we’re put together. The way we compute things is seriously amazing; far too complex for machines to mimic. At least right now.

There’s another interesting thing about AI and many subfields of AI, and that’s how the perception of what it should be and do has changed. In the beginning, AI and related fields were all about modeling the way humans worked. Like, humans reason symbolically, so it was important that artificially intelligent computers also reason symbolically. In NLP, it was important that systems parse sentences and words like humans would. Stuff like that. The goal was to make a machine think like a human.

What ended up happening is they ditched all that, for the most part, and now only really care about what works. For example, a big field in AI is probabilistic reasoning. Humans don’t think in numbers and probabilities and stuff like that, but people don’t care, as long as the results are good. Similarly, NLP has moved from parsing the way humans do to doing whatever works, and that has mostly been a statistical approach. Again, this is something humans never do. At any rate, it’s interesting to me.
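To illustrate what I mean by the statistical approach, here’s a toy sketch (my own example, nothing from a real system): instead of parsing a sentence the way a person would, you just count which words follow which other words in a corpus and turn the counts into probabilities.

```python
# Toy statistical NLP: estimate P(next word | current word) by
# counting bigrams in a corpus. No grammar, no parsing -- just counts.
from collections import Counter

def bigram_probs(corpus):
    """Return a dict mapping (w1, w2) to the estimated P(w2 | w1)."""
    words = corpus.split()
    bigrams = Counter(zip(words, words[1:]))   # count each adjacent pair
    unigrams = Counter(words[:-1])             # count each starting word
    return {(w1, w2): count / unigrams[w1]
            for (w1, w2), count in bigrams.items()}

corpus = "the dog saw the cat and the dog ran"
probs = bigram_probs(corpus)
# "the" is followed by "dog" 2 out of the 3 times it appears here,
# so probs[("the", "dog")] comes out to 2/3.
```

No human ever learned language by tallying word pairs, but scaled up to huge corpora, this kind of counting is exactly the "whatever works" approach that has taken over the field.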
