Why did 'the web' beat 'distributed objects'?


(Andy Wootton) #1

I was AWOL in the years when HTTP became a delivery mechanism for applications, but when I finally noticed, I was very surprised.

What made it seem like a good idea to run transactional software over a stateless protocol designed to serve static hypertext over unreliable connections? Aren’t half the problems in webdev caused by taking this shortcut, and isn’t it time someone designed a protocol that is appropriate for the job?

[Runs back to ‘Cave of Web Ignorance’ while people explain why I’m wrong]

For comparison: when I was working with developers in the mid-90s, OO was happening and people were moving away from proprietary networking solutions like DECnet to TCP/IP and architectures like DCE (Distributed Computing Environment) and Object Request Brokers like CORBA, which seemed a very good idea.

Does anyone know where that went wrong? Was it just that DCE wasn’t adopted by the VB programmers on Windows because Microsoft bribed them with something easier, to keep them Microsoft-only, or were distributed objects actually a bad idea? Is there anything current that works this way?

I asked WikiP too:

"The rise of the Internet, Java and web services stole much of DCE’s mindshare through the mid-to-late 1990s, and competing systems such as CORBA muddied the waters as well.

One of the major uses of DCE today is Microsoft’s DCOM and ODBC systems"

Typical! Open Systems competed themselves out of existence and Microsoft stole the ideas.


(Richard Cunningham) #2

If I understand you correctly, the reason is the desire for stateless protocols, which let the server operate with much less memory per client (essentially none), scale out to multiple servers, and handle client and server restarts easily.
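
Roughly, the contrast looks like this - a minimal sketch with made-up handler names, not any real framework:

```python
# Stateful: the server keeps a session object alive per client,
# so memory grows with the number of connected clients.
sessions = {}  # client_id -> accumulated session state

def stateful_handler(client_id, request):
    session = sessions.setdefault(client_id, {"history": []})
    session["history"].append(request)
    return f"request {len(session['history'])} for {client_id}"

# Stateless (HTTP-style): everything needed arrives with the request,
# nothing is remembered afterwards, and any replica can answer next time.
def stateless_handler(request):
    count = int(request.get("count", 0)) + 1
    return {"body": f"request {count}", "count": count}  # client re-sends this
```

In the stateless case the client carries the state (cookie, token, URL), so a restarted or entirely different server can pick up exactly where the last one left off.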


(Jim Gumbley) #3

Wow, DCOM, that’s a bad memory.


(Andy Wootton) #4

Thanks. I think I was using an X terminal with 10MB, connected to a MicroVAX with maybe 16MB of memory shared between a roomful of people, so is this still an issue?

I’m thinking of something running on my phone that is somewhere between an X session and a browser session, that sends and receives messages, then replacing the web server with a broker that finds appropriate remote objects when I need them. Is that very different in terms of overhead?
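
Something like this, maybe - totally hypothetical names, because I’m sketching, not describing a real ORB:

```python
# A broker that maps object names to whichever host currently serves them.
class Broker:
    def __init__(self):
        self.registry = {}  # name -> (host, port)

    def register(self, name, host, port):
        self.registry[name] = (host, port)

    def resolve(self, name):
        # The client would then connect to this host and invoke
        # methods on the remote object directly.
        return self.registry[name]

broker = Broker()
broker.register("CustomerAccounts", "objects1.example.com", 9000)
host, port = broker.resolve("CustomerAccounts")
```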


(Richard Cunningham) #5

Yes, because the objects get bigger as the computers do, and servers can’t have thousands of times as much memory as the clients. In fact my Digital Ocean server has 1/16 of the memory of my laptop and has to deal with potentially thousands of visitors. Even when a server stores small amounts of state (e.g. SYN cookies, connection tracking) this can overrun the memory allocated to it, especially in a DoS attack.

Remote desktops are a different issue, but realistically each full client needs 1-2GB of server memory, so even a 128GB server gets full quickly. Also, X is a rubbish protocol for use over slow or lossy connections, because it has to transfer everything even if it ends up arriving really late - it can’t drop frames from a video or animation, you just have to view them in slow motion.

In your webserver example, if the server has to hold the state of those objects, that could be 10MB just for you, and then that’s only about 50 clients on a 512MB RAM server. Whereas stateless, you can deal with tens of thousands of clients, because you don’t have to remember stuff about them.
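
As a back-of-envelope sketch (the 16KB per-request figure is just an illustrative assumption):

```python
server_ram = 512 * 1024 * 1024              # 512MB server

stateful_per_client = 10 * 1024 * 1024      # ~10MB of live object state each
print(server_ram // stateful_per_client)    # -> ~51 concurrent clients

stateless_per_request = 16 * 1024           # ~16KB transient buffer per request
print(server_ram // stateless_per_request)  # -> 32768 requests in flight
```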

Clients almost always have more compute power than the fraction of the server’s power that’s intended for them.


(Andy Wootton) #6

I didn’t mean X in reality, just something that draws screens, like a remote desktop.

Though X doesn’t deserve its bad reputation for network load. When I was at Jaguar, they were running many Unix graphics workstations on 10Mbit hubs, giving maybe 2Mbit per workstation, and there are protocols that can compress X now.


(Richard Cunningham) #7

Yeah, like VNC. We have a server people do that on; each person is using 1-2GB, often with little activity, and no one else can use that memory in the meantime - often they just abandon the session, leaving someone else to kill it.


(Andy Wootton) #8

Yeah, but VNC is rubbish, isn’t it? :slight_smile:

I must reluctantly admit that Microsoft did a much better job on RDP.


(Richard Cunningham) #9

It’s not particularly the fault of VNC; it’s firefox/chrome/matlab/emacs or whatever that’s using the memory.


(Andy Wootton) #10

But that’s your ‘user session’, isn’t it? That could be local on your 16GB phone, where memory is cheap. The protocol could just be handing over the bits of the file you were working on and only sending back the changes (maybe). The remote object would be a few bytes recording where you are and a pointer into the file, to hand to a file system object, which might be on a different box.
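
Something with a message shape like this, perhaps - completely made up, not an existing protocol:

```python
# Client edits a local copy and ships just the changed span back.
change = {
    "file": "report.txt",     # hypothetical file name
    "offset": 1024,           # where the edit starts
    "data": b"new contents",  # replacement bytes for that span
}

def apply_change(change):
    """Far end: patch the stored copy in place."""
    with open(change["file"], "r+b") as f:
        f.seek(change["offset"])
        f.write(change["data"])
```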


(Richard Cunningham) #11

Why would the server need a pointer into the file? Why not just keep all the state at the client?
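
i.e. each request carries the position, a bit like an HTTP Range header, so the server remembers nothing between calls. A sketch, with invented names:

```python
def read_chunk(path, offset, length):
    """Server side: open, seek, read, close - nothing survives the call."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Client side: the only cursor lives here, on the client.
offset = 0
while True:
    chunk = read_chunk("report.txt", offset, 4096)
    if not chunk:
        break
    offset += len(chunk)
```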


(Andy Wootton) #12

I was thinking of encapsulation, without actually having a clue how this is going to work, because I’m making it up on the fly. But it seems to me that I know where I am in the virtual file but shouldn’t need to care how it is stored at the other end. The far end needs to know what is open so it can handle other requests.
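
Maybe something like this - an invented interface, thinking aloud:

```python
import uuid

open_files = {}  # handle -> storage details: the far end's "what is open" table

def open_remote(path):
    handle = str(uuid.uuid4())           # opaque token; means nothing to the client
    open_files[handle] = {"path": path}  # how/where it's stored stays at the far end
    return handle

def read_remote(handle, offset, length):
    info = open_files[handle]
    with open(info["path"], "rb") as f:
        f.seek(offset)
        return f.read(length)

def close_remote(handle):
    del open_files[handle]  # frees the far end's few bytes of state
```

The client keeps its own position; the far end keeps only the handle table, so it knows what’s open without holding a whole session per user.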