I haven't used it, I didn't know it existed until now, but I'm happy it exists and has been providing service to those who need it. There should be more of this.
Thanks. Most of these came out of restrictions: we can't afford to throw money at horizontal scaling (adding more servers, load balancers, etc.), so we were kind of forced to try out new things to keep costs affordable. There are many things left out of the above doc: IIRC we started with OpenVZ, and even today our security relies on SELinux; how we remapped user account creation with pre-existing templates for ext4 quota; we moved to XFS because of its flexibility; MySQL DB quotas/limits; fork bombs from college/school students bringing down the Docker environment. Old school internet is the right term.
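For anyone curious what "fork bombs by students" defenses look like in practice: the standard mitigation is a per-user process cap. A minimal sketch (assumed usernames and limits; not necessarily what webminal actually configures):

```shell
# Persistent form: cap processes for a hypothetical 'student' account
# in /etc/security/limits.conf (standard pam_limits path on most distros):
#   student  hard  nproc  100
#
# Disk quotas on XFS work similarly, e.g. (run as root, illustrative values):
#   xfs_quota -x -c 'limit bsoft=450m bhard=512m student' /home
#
# Per-session form, demonstrated here in a subshell: lower the soft
# process limit, then read it back. A fork bomb inside this shell would
# exhaust its own quota instead of the whole machine.
( ulimit -u 100; ulimit -u )   # prints 100
```

The soft limit can be lowered by any unprivileged user, which is why the subshell demo works without root; the `limits.conf` entry is what makes the cap stick across logins.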
That's wonderful, and I know why it's an Indian founder. It was so hard to get a remote shell back then; Indian debit cards didn't work online reliably, and so on. So what's the hardware underneath? Cloud servers or on-prem?
These days the world is amazing: Oracle Cloud gives you a ton for free. But perhaps there's some niche where this is useful. I have to say that this shared-screen comms system is outrageously crazy, hahaha.
It began as on-prem: Freston hosted it in his house (we shared the server cost; some people called it crazy, because I sent money to someone I met on Linuxforums.org and had never seen this person, even via the internet. I trusted him because I'd known him for a few years on that forum). After 3 years or so we moved to cloud servers, mostly switching from one infra to another whenever we got some credits :D For a couple of years Linode sponsored those nodes, until its acquisition.
>shared screen comms system is outrageously crazy,
That's Freston's idea. I remember our typical chats began with something like
"Hey Laks, Can you see me typing!" ;)
To be fair, 8 GB of RAM is huge. I don't know, maybe I'm stuck in the early '00s, but even 2 GB of RAM still seems extravagant; I remember when that was an exotic amount of RAM for dedicated gamers to play extremely high-fidelity games, so for a mere web server 8 GB almost seems like absurd overkill. I still feel a tinge of shame whenever I see any software of my own using more than a few hundred megabytes. What a waste.
The major difference here is that this is intended for multiple users, not one person. Imagine 5,000 users all using the device at the same time. The memory, open file handles, network connections, etc. for many users at once add up.
Depends entirely on what you're doing. 8 GB of RAM is nowhere near sufficient for 3D texturing workflows, for example, where you can have many different 4K textures cached in memory. For other things, 8 GB is probably a lot.
Quite often clients were more powerful than servers. Hell, at one point a CPU embedded in a printer could be faster than, say, an 8088. An X server (running on the client side) often required a more powerful machine than one running X clients (i.e. a server). A web browser is no exception.
I barely used or remember the ZX-81 my folks had, with its amazing 1KB of memory. It had a 16K expansion module you could plug into the back, which apparently made a big difference, but it also didn't have the greatest connection: you could easily dislodge it by typing on the keyboard. I do remember my father coming up with various ways to try to secure it.
The ZX Spectrum that followed, with its huge 48K of RAM, was night and day. The programs were so much more complicated.
Even echo on Linux these days takes 38K of disk space and a baseline of 13K of memory to execute, before whatever is required to hold the message you're repeating.
RAM was so tight on those 8-bit machines that many games used tricks like hiding data inside the viewable area of the screen to eke out just a little bit more.
Well... the best days were just putting hardware in a 2U box, racking it, and paying a bit for power and networking. It was such an easier time, and a handful of Core 2 Duos were fully capable of streaming 1080p video to around a million DAUs.
Of course, there's far more money in really fancy shared hosting that wastes resources, so that's the current model. Then you market to C-level folks that "real companies" host on AWS or Azure, and that all other options are "unserious." If your opex for compute isn't a million, you're wrong.
Sure. We had quite a lot of universities and schools using this platform for their classes. I'll be away from the system for the next 48 hrs though. Drop us a mail, we'll respond.
It takes a lot of guts to run something like this for years on end, kudos to you for setting this up and running it for all these years. I am wondering if you'd ever come across pubnixes or tilde servers when you first started up webminal?
Actually I opened up GitHub Sponsors just a few weeks ago. A few times I received enquiries from users (professors) who wanted to contribute back; only now do I have a proper channel to redirect such requests.
In the past I have seen around 10 processes, but I think with the current setup it could support up to around 20 UML instances. Remember, this runs on the same server where others log in and get their normal bash accounts too, so it's not a dedicated UML server.
I really like the ease of use of the site. It's also very clean. However, when you go into the Linux terminal, there is a bit of very noticeable latency. I know that it's impossible to remove the latency completely (it is what it is), but is there a way to reduce it slightly?
There will be a little latency if you access it from a different region; the server is located in Singapore. From India, I checked right now directly via this link https://www.webminal.org/terminal/proxy/index/ and I don't see much of an issue. I use Firefox/Chrome on Debian. Maybe try a different browser?
Only UML is the resource-consuming part, and it's kept as an option available on request. The rest (Shellinabox, nginx, Flask) are all shared, and each active user session consumes little RAM since it's a shared terminal. A simple `ls /home` shows all the other users on that server!
Oh man, what a blast from the past. I have fond memories of learning linux networking with netkit (based on UML).
UML was a really really cool piece of technology.
If anybody is wondering, User Mode Linux lets you boot a Linux kernel as a normal Linux process, and then run a userspace, still inside a Linux process. This is from 2001. Super cool.
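For the curious, getting a UML guest running looks roughly like this (a sketch; the rootfs filename and memory size are placeholders you'd supply yourself):

```shell
# Build the kernel for the 'um' architecture. The result is an
# ordinary ELF executable named ./linux, not a bootable image.
make ARCH=um defconfig
make ARCH=um -j"$(nproc)"

# Boot it as a regular process: ubd0= maps a host file to the
# guest's /dev/ubda block device, mem= caps the guest's RAM.
./linux ubd0=rootfs.img root=/dev/ubda mem=256M
```

Because the "VM" is just a process, you can kill it, strace it, and run several side by side under per-user resource limits, which is presumably what makes it attractive for a shared teaching server like this.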
I was trying to remember what this was called the other day, for some reason.
It turns out that if you run a uml kernel and point its root at the root of the disk the host Linux is running on, there's a hell of a turf war between the two and no-one wins.
Great work giis.