Monday, December 20, 2010

Google Chrome OS Tentatively Released

Google's Chrome OS has been tentatively released in a "pilot program" for testers who will receive a Cr-48 Chrome Notebook. To receive the notebook, one must apply to beta--err pilot test the forthcoming operating system.

Google has apparently already signed deals with Acer and Samsung to release laptops in 2011 with Chrome OS pre-installed, and I believe that the Cr-48 will be commercially available as well. That will more than likely be the Cr-49 or 50, after the testing has been completed, but you get the idea.

So as you probably know, Chrome OS is a cloud-based OS. In other words, it's really no more than a self-contained Internet browser. It's designed purely for people who use their computers exclusively for Internet access. Any applications that need to be run will be cloud-based apps accessed through the browser.

Ignoring my previously stated skepticism of cloud-computing and Google's cloud-based apps, my question here is will there be a market for this OS and more specifically, a market for the coming Chrome OS notebooks? Because the OS is a Linux distro, will it have any impact on Linux and the Linux community?

Well first of all, I don't see it being widely adopted as a desktop OS. It's not designed for that. There are many cloud-based OSes out there already and they are not being used on the desktop. Again, that is not the design of the software. That leaves laptops and netbooks. But I think this is a mistake. I don't see netbooks being around for much longer, and laptops need a complete operating system or they're just a waste of space and money. If the OS were redesigned for a tablet, that would be something else. Cloud computing on a tablet I can understand. A tablet doesn't need any more power than it takes to make some quick adjustments to a document while on the go, or a quick status update to friends or an employer as you head across town to a meeting (and maybe a video or two on a flight).

That being said, I don't think that there are enough cloud apps available to justify an OS operating completely through a web browser. And the apps that are available are too limited for any kind of professional work. As an illustration, let's take a look at the Chrome Web Store. If you are using the Chrome or Chromium web browser, you can try this out for yourself.

I won't go into the pros and cons of the store itself (there is a lot I could say to criticize it); I'll just talk about the apps available. In the productivity menu (we're assuming professional use here) let's look for an image editing application. Right now a featured app is the "Advanced Image Editor by Aviary". After selecting it, we are prompted to install it. The "installation" brings us to a web page: http://www.aviary.com/online/image-editor?lang=en#&src=chromeos. I am truly surprised at the features it has. There are several filters and tools available, and you can even work in layers. That is very good for a web application. But as a professional solution? It will never replace GIMP's functionality and is light years behind Photoshop. I'm really not even sure if it's decent for a hobbyist. Why use this when GIMP is free? For those who want only to make quick edits for uploading to Facebook, fine, but in no way is this a professional solution.

The same website (Aviary.com) has another link from the app store for music creation. And again I'm surprised at the sophistication in a web application. This one had some minor issues (dragging and dropping instruments into tracks was extremely sluggish), but on the whole I have to say nicely done. There is a wide variety of instruments to choose from, and it supports importing your own sounds. But again, this is not a professional solution. LMMS, Ardour, and others are completely free and have way more features.

I'd like to look at one more app. This time I'll move away from the productivity category and move to entertainment. Let's try one of the games.

The first game that caught my eye was Runescape (it was featured). I've never really been into MMOs so I wouldn't really be able to give an unbiased opinion of it, but I would love to hear from anyone who does play that type of game. The graphics impressed me only in the fact that the game was played through a browser. In comparison to other games, it would only be impressive if you had played it 6 or 7 years ago.

I did actually try Quake Live, which was also in the store. However, in spite of the fact that it is available in the store, it does not work with Chrome. This app only works with IE 7+ or Firefox 2.0+. I'm assuming that since it is in the store, it will soon support Chrome (and thus Chromium), so I gave it a go.

The graphics quality of Quake Live lies somewhere between Quake 2 and Quake 3 Arena. It plays very well, and the gameplay is very reminiscent of Quake 3 Arena or Unreal Tournament.

On top of the free-to-play model, there are 2 subscription-based options for playing: the Premium Membership for $1.99/month and the Pro Membership for $3.99/month. The monthly fees are not worth it in my opinion. A game like this would be worth at most $10.00 as a purchase (being so dated), so I can't imagine paying a monthly fee for it. The breakdown of the different subscriptions can be found here.

To go back to my earlier statement about Chrome OS being better suited for a tablet than a notebook, I'll grant that an FPS would be more difficult on a touchscreen. But at the moment the game isn't even compatible with the OS, so I don't know what to say here.

So, with Chrome OS still in beta, it's hard to say if it will be a success or not. My thought is that it doesn't deserve to be as it stands, but it does have the Google name behind it, so who can really say? Perhaps if they go the sane route and at least put in a virtual keyboard for those who want to install it on a tablet, I might be a little more optimistic.

There is still the big elephant in the room, however. I'm referring to Google's habit of collecting data. Now this is open source, so if Google is collecting data right from the OS itself, we will soon know about it. But since all usage of this OS centers around your data being kept on a server "somewhere out there," I just cannot get behind it. There is no source code to look at for the web-based apps, and you can be sure that the user is not the only one who has access to the files stored in the cloud. But in this case my personal feelings are irrelevant.

Let's pretend for a minute that this really takes off and overtakes iOS as a popular OS on the Internet. Will this help legitimize Linux for the masses as a viable choice on their PCs? Again I can't help but be a pessimist. Android OS is currently more prolific than iOS for smartphones. Yet most people using Android don't even know they are using Linux. I feel that will be the same here. Google isn't going to go out of their way to tell people where the OS really comes from, or to inform people about free or open source software. They will push their brand name and tell people they are using Google. That's it.

And that's honestly a good thing. Google's business practice contradicts what free and open source is all about and I really wouldn't want the two being associated together. Linux will keep chugging along at its own pace. With the improvements and growth Linux has made in the last few years, it doesn't need a name like Google behind it anyway. Linux is doing perfectly well on its own.

Sunday, December 19, 2010

Multi-Monitor Support Forces More Use from Windows Partition

I've been struggling with multi-monitor support for a while now. I'm still not 100% satisfied, but I think I've got the best solution possible for my circumstances. I should probably share my system specs before I go any further.

CPU: Intel Core 2 Duo E8400 3 GHz
2 Nvidia GeForce 9800 512MB video cards
4 GB DDR2 RAM

This was what I wanted: SLI configuration, compiz effects, both monitors spanning 1 desktop (to drag windows from one monitor to the other), separate virtual workspaces for each monitor including different wallpaper.

The end result of much effort and several attempts is:

SLI simply does not work with a multi-monitor configuration at all. In every configuration I tried, the second monitor simply would not start with SLI enabled. So if I wanted to use both cards, TwinView was not an option.

Separate X screens without Xinerama worked, but the setup was unusable for me because it caused some really strange behavior: Nautilus windows started popping up in the task manager and would not stop launching until the system crashed. Until the crash, though, both monitors worked and compiz was enabled as well. With separate X screens, of course, dragging windows from one screen to another was not possible.

I tried enabling Xinerama with the separate X screens, and this worked fine but for two things:

  1. No compiz effects. This was disappointing, but I could live with it.
  2. No KDE apps. Indeed, trying to start up any KDE application would cause the entire desktop environment to crash to the login screen. If I tried to log into the KDE desktop environment rather than Gnome, it would load and crash, again leaving me at the login screen. This I cannot live with as there are a few KDE programs that I just cannot live without.

That left me with TwinView, which does work very well, except for 2 things:

  1. Separate virtual workspaces for each monitor are not possible. Xorg sees only 1 desktop, which means that each monitor must use the same wallpaper, and switching virtual workspaces will cause both monitors to make the switch. It cannot be done independently.
  2. My second video card is now not being used. Now I would like to point out that SLI never worked very well anyway. Or I should say that it never worked very well when compiz was enabled. I actually switched to KDE briefly for this reason (I could have the plasma desktop effects enabled and SLI worked fine). I'm now really regretting going the multi-GPU route when I had this machine built. I wish I had just purchased one 1024MB card, but c'est la vie.
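For anyone wanting to try the TwinView route, the relevant part lives in the Device section of xorg.conf. This is only a minimal sketch: the identifier and monitor resolutions are assumptions, so adjust the MetaModes line to match your own panels.

```
Section "Device"
    Identifier "Card0"             # assumed name; match your existing config
    Driver     "nvidia"
    Option     "TwinView" "true"
    Option     "TwinViewOrientation" "RightOf"    # second monitor to the right
    Option     "MetaModes" "1680x1050,1680x1050"  # assumed resolutions
EndSection
```

Note that this drives both monitors from a single card, which is exactly why the second GPU sits idle in this setup.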

This current setup suits me fine for the most part. For graphically-intensive gaming however, I will have to rely on my Windows partition a little more than I do currently which is a real shame. I would like to point out that I believe blame can be laid squarely on Nvidia for lousy SLI support in Linux. In Windows I can have a multi-monitor setup with SLI enabled with no problems whatsoever. I know this has to be possible in Linux as well, but Nvidia just can't be bothered working it into the drivers. And they refuse to release the source code so that others may do the job for them.

I'll admit that compiz is not innocent here. But even with compiz disabled there is some discernible stuttering in the display when SLI is enabled.

This brings me to another point. When something quirky happens in Linux, it doesn't mean that Linux is quirky. It means that some piece of software is having some undesired results. To put this in perspective, let's look at some software that runs on Windows with less than desired effects. Perhaps the most common that I see is the umpteen-thousand toolbars that people, for whatever reason, install in Internet Explorer. The result is not only a slower browser, but with all the spyware that inevitably comes along, the whole system slows to a crawl (and why the hell are you using IE anyway?)

I could bring up more examples obviously, but you get the point. In either of these cases, the OS is not to blame. It is either the software being run or a configuration problem (well, it's both in each case; the Windows example includes a configuration issue because if a guest account were used instead of the default administrator account, the spyware wouldn't be permitted to install).

But I digress. I didn't really mean to turn this into a Linux/Windows comparison. This was only meant to be a more personal post of my own experience.

If you have any experience with the issue I've described, please comment and if you have a solution I may have overlooked, please share it.

Wednesday, December 15, 2010

Implications of the WikiLeaks Scandals

I've been following the story of Julian Assange's arrest and of course the US government reaction to the cable leaks. I wasn't going to bother with a post about this, but the more I read, the more it seems to occupy my mind. It's going past the typical "amusing antics of the US government" and delving into Orwellian territory.

Now this goes beyond the concept of respectable or responsible journalism. Whether or not WikiLeaks falls into the above categories or is simply out to embarrass governments in the same way that tabloids seek to embarrass celebrities and other public figures is not the issue. What is at issue is the freedom to distribute information. In Western culture are we not free to discuss and inform anything we feel has importance, or might be important to other individuals? It's becoming apparent that this is not so.

We have high profile public figures calling for Assange's assassination. Large media outlets such as the New York Times are being blocked for publishing a few of the leaked cables. The "rape" charges against Assange seem more and more like an attack on his character in hopes of discrediting him rather than any kind of serious allegation. (Here is a detailed account of his crime and arrest. This is a very interesting assessment of the charges by acclaimed author and feminist Naomi Wolf.)

This whole situation is getting out of hand. Is this the death of real journalism? Depending on what is being reported, it seems that journalists soon may find themselves in a situation where their profession necessitates operating outside the law. I'm sure that Julian Assange would say that this situation is already here. It's exasperating.

I find myself wondering how Carl Bernstein and Bob Woodward (the Pulitzer Prize-winning journalists who uncovered the Nixon-era Watergate scandal) would have fared in today's political climate.

UPDATE:
This is a video from October 22 discussing the (then upcoming) Iraq War Docs leak that brings some perspective. Daniel Ellsberg, the famous whistle-blower of the Vietnam War in 1971, speaks. Part one of two.

Tuesday, December 7, 2010

The Year of Linux (Take 19)

The general consensus (according to w3counter.com, which gets its data from web usage only) is that the current market share for the Linux OS is approximately 1.5% (as of October 2010). Yes, there are other stats that give different numbers, but this seems like an approximate average. Some have Linux's share at below 1% and more than a couple have it at above 5%. The specific numbers don't even matter to me at this point. I only think that they should be better.

Now of course I've also heard the arguments that globally the percentage could be as high as 40%, taking into consideration 3 points:

  1. Countries outside of Europe and North America are not considered in most if not all data collecting websites.
  2. The population of China and India together is roughly double the population of Canada, the United States, and all of Europe combined.
  3. The governments of both China and India actively promote the use of GNU/Linux over Microsoft and Apple OSes.

I only point to this as a way to discourage the mindless Linux hate coming (mostly) from Microsoft users. I fail to understand where this hate comes from, especially considering that it mostly comes from people who haven't given Linux a chance at all, or from those who tried it once over a weekend. But I digress.

My real question here doesn't have anything to do with Linux popularity in various Asian countries. My question is: what would it take for the Linux market share to rise above, say, Mac OS X's desktop market share?

I think I have an answer, but it will take a bit of explanation.

Linux is the No-Name Brand OS

At least to Windows and Mac users it is. Most people consider Linux to be the cheaper and less-usable alternative to the "name-brand" Windows and Mac. This, of course, couldn't be farther from the truth but the only way to change people's minds is to actually get them to use it for more than a day. And I don't think that this can happen without a cultural change.

Consider the time-frame that Microsoft came to dominate the market.

In the early '90s, Microsoft was still the young, upstart software company that was starting to do battle with the big, evil corporate Apple. Windows 3.0 was released in May of 1990. (3.11, the big one before Windows 95, was released in 1993).

Anyone remember what else was going on at the time? I mean outside of the computer world. Culturally speaking. Anyone?

There was Seattle, the whole "Grunge Movement," and everyone in any advertising agency freaking out because no one was listening to them. Fashion magazines had no idea what to do and were dressing models in plaid and knee-high combat boots, selling outfits for $50 and less. No-name brand products, for the first time, were outselling name-brand products. People were about function, not style. And people loved to support the underdog. Western culture, in these three or four years of the early '90s, was saturated with these ideologies.

People also started buying computers a lot more than they ever had before. Technology started to be more at the front of people's minds in the mid '90s. I remember "multimedia" as the buzzword, but that was quickly replaced as the media started to become aware of this mysterious thing that hackers were now doing: "surfing the web". And the Internet began to be seen as a re-emergence of the Wild West, at least that is how it was portrayed.

Enter Windows 95, the latest offering from that young upstart software company that ran perfectly well on an IBM clone -- the no-name brand computer. The timing was really perfect. Microsoft was well on its way to dominance, but already the function-not-style mindset was dissipating. Grunge was dead. And musical tastes were changing too. "Electronica" was now becoming a movement. Raves became popular again. And this edgy sounding music was all technology-based. Kids could make this sort of music in their basements if their parents had bought them a computer at some point.

Windows 98 was released in June of 1998. Advertising agencies were breathing a sigh of relief, and starting to relax a bit. The recession was over, and the US budget was balanced. People had disposable incomes again. The depressing music of the early '90s was gone. The Spice Girls, the Backstreet Boys and Britney Spears were at the top of the charts. The Smashing Pumpkins disbanded in disgust. And Microsoft was king. By the time XP was released in 2001, America was thoroughly back in the name-brand, style-before-function mindset. As long as something looked pretty, it didn't matter if the core was rotten. If it was expensive, it must be good.

Microsoft got really lucky. Windows had matured in a way that completely fit into the rest of America's culture. Don't get me wrong, Gates's business sense had much to do with Microsoft's success too; I'm just saying that the timing of it all expedited its popularity.

Considering this, where are we left? Well Microsoft is still undeniably king. No longer are they "the young, upstart software company that was starting to do battle with the big, evil corporate Apple". They are now the big evil corporation that people are going to Apple to escape from. (Fill this space with any ironic comment you wish).

But we are in a recession now. Why are people paying even more for a Mac, rather than switching to Linux which is cost-effective and ultimately better than both? I would say the reason is that the culture didn't change with the economic situation like it usually does. The reasons for that are many and varied. Not the least of which is the anti-sharing media campaign put forth by the RIAA, MPAA and other groups. (And hey, telling people not to share is telling people not to use free or open-source software).

So to finally bring this around and include the reason for the title, will the year of Linux ever come? Probably. But not any time soon. If 9/11, two wars, the worst recession in decades and Michael Moore can't seem to make a significant cultural change, then I just fail to see how it's going to happen. And without it, I can't see people abandoning their OS of choice en masse.

Unless of course it's Linux that actually causes this cultural shift I'm waiting for... hmmm...

Friday, November 26, 2010

Richard Stallman is Right

I've been "dabbling" in Linux for a number of years now, but I've really only given it a serious try in the last few months. During that time I've become very interested in the Open Source and Free Software philosophies. I've watched documentaries on the history of Linux and the FSF, and videos of Richard Stallman's lectures.

Let me say first that in the history videos I've seen, Stallman really does come off as the freedom fighter he seems to see himself as. I really admire what he's done, particularly in running with his ideas and finally coming up with the GNU GPL. His more recent lectures, however, leave a sour taste in many people's mouths, including mine. He seems more like a stubborn zealot than a freedom fighter. To demonstrate this, let's have a look at the Free Software Foundation's website, specifically the list of approved free OSes, and why some are not included.

My current OS of choice, Ubuntu, would definitely not make it in the list. Ubuntu offers me the choice right at the point of install to use proprietary codecs and drivers. Some included repos have proprietary software that I could install using the included package manager. Indeed, on my personal system I am very conscious of this. I am using the Adobe Flash plugin for my web browser, and I am using the proprietary Nvidia drivers. I have Skype installed. All of these things would exclude me from being a part of the FSF community.

So what if I wanted my system to be 100% free? I have some experience with Debian; I have a server here running LMDE. So I could switch everything to stock Debian, a distro that many, many other distros start from. Well, even Debian is not on that list. And the reason is that there is actually proprietary code within the stock Linux kernel. Proprietary kernel drivers, to be more precise. It also offers proprietary software through its repos.

This in and of itself, that the FSF condemns Debian as a non-free OS, is enough to turn most people away. But there is another side to this coin. There are real, inherent dangers to proprietary software, and the FSF is really only trying to avoid those dangers, and warn others of those dangers at the same time. The more proprietary software that the kernel depends upon, the bigger the danger that it will at some point have to go backwards to replace those proprietary blobs in order to remain free (as in beer). There is always the chance that those blobs will some day come only with a price tag. Or cease and desist letters. If that day eventually comes, those blobs may very well have to be replaced using code from the FSF (accompanied by a very smug [and justified] "I told you so").

Some may say that this is overstating things a bit, and in the kernel's present state that may be. But the list of proprietary blobs occupying space in the kernel seems to grow with every release. Will the day come that Linux in its official state will have to be considered proprietary? Its present growth certainly indicates that as a possibility. Or at least that the proprietary code will become predominant over the free code. And all that proprietary code is owned by someone. As Linux grows in popularity (it does experience exponential growth every year), do you really think that at no point in time any of the companies that own that code are going to start withholding it pending licensing and payment? It's simply naive to believe that. That cost will then have to filter down to the user in order for Linux to continue in development. And the biggest side-effect would be (is?) that as Linux goes down the commercial path, it becomes more and more like the restrictive, closed environments that all of us, developers and users alike, wanted to avoid in the first place.

It's either that or Linux as a desktop will have to take giant leaps backward in functionality (and all the bad press that comes with it).

Thinking in this direction, I begin to see Stallman's point. This could all be avoided if the "narrow winding path" was chosen and all proprietary code was rejected in the first place. It would be a slower process, sure. Many things that can be done on a Linux desktop now wouldn't be possible--yet. All that proprietary code would have to be rewritten from the ground-up, reinventing the wheel as it were. But there would never be the worry of anyone owning the system. It's owned by us all.

This makes me thankful for the Free Software Foundation. In avoiding the "wide and easy path" they are ensuring the future of the Free Software philosophy. They are ensuring that computers will in some fashion always be affordable to virtually everyone.

So will I be switching to an FSF approved OS? At some point I think I'll have to, but that will not be anytime soon. I'm still dual-booting Windows. But when that day comes, boy will I be grateful that Richard Stallman stuck to his guns. He may be a thorn in the side of the Open Source movement right now, but he certainly has his place.

I think the Open Source movement needs to be reminded of its roots now and again; it needs people like Stallman to point out that this direction goes against the ideals that prompted the creation of GNU/Linux in the first place. It may not be nice to hear at times, but sometimes the truth hurts.

Thursday, October 7, 2010

Cloud Computing: Take Me to Your Data

Netbooks were a great idea when they first started to appear, weren't they? Just a tiny little laptop with enough power for a number of day to day activities, super-portable, great for note-taking and other classroom activities, really the perfect solution for any student--or professor for that matter.

So it wasn't surprising at all to see the release of some basic apps that were run completely from a web page. Word processor, spreadsheet app, etc. Netbooks can connect to and use these apps straight from a browser. And with Google's proposed Linux distro, Chrome OS, being made specifically for netbooks that would have links to these various apps by default, it seems likely that other netbook distros and Windows releases etc. will have their own cloud apps to choose from as well.

Of course this "cloud computing" idea isn't anything new. Web-based email clients have been around for over 10 years. Web-based IRC chat clients have been around for even longer. Soon after that, primitive WYSIWYG HTML editors started to surface as well. So what is the problem with the expansion of this idea into other realms that could even be used professionally (and who cares if the big corporations providing these services give it a slick name like "cloud computing")?

This article is going to focus on Google, as they seem to be at the forefront of this whole coming change. Google of course has come under fire for their data-collecting practices revolving around the Google search engine. It's actually this practice that gets my paranoia-nerves on edge. Collecting IP addresses correlated with specific searches categorized in a searchable database is to me unethical to say the least. One can only imagine the purpose to something like this. I personally no longer use Google for my searches, and am very careful with the content I upload or view using other Google services.

Here I'd like to move away from Google briefly and talk about Facebook. Facebook is also a cloud service with a whopping 500 million users. It needs no introduction; everyone is aware of it. What you may not realize is that anything a user posts on Facebook is no longer theirs. Facebook now owns it. Here are a couple of quotes from Facebook's terms of service agreement:

“You hereby grant Facebook an irrevocable, perpetual, non-exclusive, transferable, fully paid, worldwide license (with the right to sublicense) to (a) use, copy, publish, stream, store, retain, publicly perform or display, transmit, scan, reformat, modify, edit, frame, translate, excerpt, adapt, create derivative works and distribute (through multiple tiers), any User Content you (i) Post on or in connection with the Facebook Service or the promotion thereof subject only to your privacy settings or (ii) enable a user to Post, including by offering a Share Link on your website and (b) to use your name, likeness and image for any purpose, including commercial or advertising, each of (a) and (b) on or in connection with the Facebook Service or the promotion thereof.”

"We may collect information about you from other Facebook users, such as when a friend tags you in a photo, video, or place, provides friend details, or indicates a relationship with you."

"We may retain the details of transactions or payments you make on Facebook."

And going back to Google, here is a quote from YouTube's terms of service that must be agreed to before uploading:

“…by submitting the User Submissions to YouTube, you hereby grant YouTube a worldwide, non-exclusive, royalty-free, sublicenseable and transferable license to use, reproduce, distribute, prepare derivative works of, display, and perform the User Submissions in connection with the YouTube Website and YouTube’s (and its successor’s) business… in any media formats and through any media channels.”

My main concern here is ownership of content. Never mind the privacy infractions. The following quote is from the "Terms" link followed from Google Docs, although I believe it is the same document for all of Google's services:

"By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services. This license is for the sole purpose of enabling Google to display, distribute and promote the Services and may be revoked for certain Services as defined in the Additional Terms of those Services.

11.2 You agree that this license includes a right for Google to make such Content available to other companies, organizations or individuals with whom Google has relationships for the provision of syndicated services, and to use such Content in connection with the provision of those services.

11.3 You understand that Google, in performing the required technical steps to provide the Services to our users, may (a) transmit or distribute your Content over various public networks and in various media; and (b) make such changes to your Content as are necessary to conform and adapt that Content to the technical requirements of connecting networks, devices, services or media. You agree that this license shall permit Google to take these actions.

11.4 You confirm and warrant to Google that you have all the rights, power and authority necessary to grant the above license."

As you can see, by uploading any content to Google's services, you give up any right to ownership and privacy as far as the content of the files is concerned. Beyond this we are faced with the "push" towards cloud computing like it is the future, and who is leading the charge to this bright utopia? Google, of course! And people eat it up, even though it is completely unnecessary. Even tablet PCs now have the power to run the basic apps that are being offered from these online services. And if a file is created locally, no one has the rights to it but you. It's really no wonder that there is this push. In a time when no one reads these terms and conditions but blindly clicks "OK" and happily (and usually unknowingly) gives up all private and commercial rights to whatever content is being used, shared and/or created, this is a fantastic way to get rights to virtually all digital content "in the cloud".

So there is no way that I would trust any of these services with any kind of sensitive data. There would be no reason to state right in the terms of service that they have the right to share my data with anyone if the intention to do so weren't there. But here's my dirty little secret: I actually like the idea of cloud computing. I like the idea of owning a tablet PC with a 3G connection, accessing my files from anywhere to work with or share at any time I please. There's a freedom in that idea that I can only liken to Star Trek. It's just really cool. So can we still use this technology and get around the Google problem?

Yes. And that solution does not involve choosing a different provider for these services. No matter what is in the service agreement, the truth is that I don't like the idea of leaving all my digital content on a server at an unknown location, with someone else controlling my access. What I envision is cloud computing where the cloud is my home PC, which I can access from anywhere using my tablet. This can already be done easily by anyone with an FTP server/client setup. (Okay, this is not cloud computing per se, but it is really the next best thing and has been around forever.) I think this will become more sophisticated, though, with a complete server application that can be left running on any PC at home. The tablet can then access the service remotely, and the application launches the word processor app, for example, embeds it in a web page and sends it back to the tablet.
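Just to show how little magic is involved in the "home PC as the cloud" idea, here is a minimal sketch using nothing but Python's standard library. It serves a folder of documents over HTTP and fetches one back the way a tablet on the road would. The directory and filename are made up for the demo, and a real setup would of course add authentication and encryption:

```python
import http.server
import tempfile
import threading
import urllib.request
from functools import partial
from pathlib import Path

def serve_directory(directory, port=0):
    """Serve a chosen directory over HTTP from the home PC."""
    # port=0 lets the OS pick a free port; the real one is in server_address.
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=str(directory))
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Demo: drop a "document" into a folder, serve it, and fetch it back
# over HTTP, the way a remote tablet would.
docs = Path(tempfile.mkdtemp())
(docs / "notes.txt").write_text("quick status update")

server = serve_directory(docs)
port = server.server_address[1]
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/notes.txt").read()
print(data.decode())  # quick status update
server.shutdown()
```

No third-party service ever sees the file, and no terms of service apply to it but your own.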

This is being done to some extent already with some applications. uTorrent, for example, has a web GUI that can be enabled and works quite well. Many routers have a remote access option. I actually see this becoming a standard feature in the major Linux distros, prepackaged and installed with the live CD. Will it catch on with the commercial OSes? No, probably not. The Googles and the Microsofts and the Apples will of course eventually start to charge for these services, probably as a monthly subscription, so they will not provide a service that makes payment unnecessary.

In the end I suppose it will make little difference to the average user, but there are businesses out there using these services right now. It just seems like quite a leap of faith to trust a company like Google with anything business-related. I think people need to see that "cloud computing" is not a new idea at all; indeed, it has really been around since the popular inception of the Internet itself. Slapping a flashy new label on it shouldn't make anyone think otherwise, and it shouldn't mislead anyone into giving up the rights to their own content. It is possible to have the convenience without using a third-party service.

Indeed you can have your cake and eat it too.

Wednesday, September 22, 2010

Impulse vs. Steam -- My Thoughts on Digital Distribution Clients for Games

With Steam having effectively cornered the market for online game distribution, perhaps this is a moot discussion, but I think it is still a topic worthy of debate.

I suppose the most obvious question to most people is "what is Impulse?" Impulse (from Stardock) is another game distribution system that is very comparable to Steam (from Valve). The two systems have many of the same games for sale, and many that are exclusive to their respective platforms. I will say that Steam has the larger selection of the two. Aside from the actual games offered, there are a few important differences between them.

The biggest difference is that Steam requires the client to be running in the background in order to play a game, while Impulse does not. This may not seem like a huge deal at first, but the implications really are. Any game bought via Steam becomes inextricably tied to the client; they cannot be separated.

Another big difference is the file structure. Steam places all its games within its own program directory, and many of the files are buried within its own proprietary container format. This makes accessing individual game files an enormous headache and makes any kind of modding a near impossibility. It leaves one with the impression that games bought through Steam belong more to Valve than to the user. With Impulse, all games are installed in the "Program Files" directory, just as if you had installed the game directly from a store-bought CD or DVD, and the files themselves remain unaltered. The games you buy are your own to do with as you please.

To Valve's credit, Steam has been released for the Mac, making it the only game distributor to do so (that I am aware of). Stardock has always said that Impulse is and will always be Windows-only. There were several recent rumors that Valve was planning a Linux client, but those rumors were quashed.

So what does all this mean for an end user? In the end, when all one wishes to do is play games, not much I suppose. Having the Steam client running in the background might cause some difficulty on older machines; I can imagine some graphics options needing to be turned down or off to accommodate the extra running process. But both systems do work very well, so it comes down to a matter of preference.

I must note, however, that there are some other considerations from a Linux user's perspective. Neither has a Linux client, but Steam does run very well under Wine. I have never been able to get Impulse to run under Wine. However, with all of the game files at your disposal, it is not necessary to run Impulse, as long as you have a Windows partition or a virtual machine where the games can be installed and then migrated over.

All in all, as a matter of personal preference, I love the Impulse system. If a game I wish to purchase is available from both, I will get it from Impulse every time. That being said, I have a few games that I bought via Steam because they were unavailable from Impulse, but it does seem to me that Impulse is getting bigger all the time.

I just wish one of them would develop a native Linux client and start to port over some games.

Tuesday, September 7, 2010

Games, Pirates and DRM Schemes

I'm a PC gamer. I love RTS games and keyboard/mouse input. I have an Xbox 360 with Xbox Live, but I rarely, if ever, use it, and I will be canceling my subscription after it runs out at the end of the year.

For the most part I game on Windows. I pay for all of my games. Generally I purchase my games via Impulse from Stardock, but I do use Steam occasionally too. I like Impulse better for many reasons, but I'll talk more about that in a later post. For now, as the title indicates, I would like to talk about DRM schemes that infest many games that are out there and that I play.

There are some DRM schemes that I really don't mind. CD keys are one method. This doesn't really apply to games purchased from digital distribution platforms, but it deserves a mention. This method doesn't install unwanted software and therefore will not disrupt my system. CD checks are annoying, but I can look past them. What I mean by a CD check is that the game will check the CD-ROM drive for the original game disc and refuse to start if the disc is not there. I have been guilty of using no-CD hacks to get past this.

Of course, the meat of this post is going to revolve around the more intrusive DRM schemes like SecuROM. For the uninitiated, SecuROM is a program that runs completely separate from the game and can check things like whether you are online and how many times you've activated the game, and can store whatever information about your computer it wants offsite.

Many users have complained about unwanted SecuROM side effects including, but not limited to, disabling applications like Nero Burning ROM and Daemon Tools, and disabling the write capabilities of some DVD drives. Later versions of SecuROM actually install themselves at the kernel level of Windows and in some cases have caused system instability or even rendered systems completely unbootable, necessitating a reformat.

At best, a program like SecuROM does what it's supposed to do: say, only letting you play a game while online, or stopping you from activating the game after 5 installs (the number of current installs being stored offsite in a database somewhere, which is kind of humorous if it was SecuROM that necessitated the reformat and reinstall in the first place).

The 5 (or whatever number) install limit I can forgive to an extent. EA Games is famous for using this one. If you do need to go past the limit, a call to customer service will reactivate the game for you, and I believe EA actually released a small app that does the same.

A game that will only let you play while online, such as Ubisoft's Assassin's Creed II, is particularly maddening to me, especially since this game is single-player only! It's insane to me that Ubisoft can't imagine a scenario where someone might want to play this game offline! Has no one there ever brought a laptop on a flight?

The really crazy thing about this is that it hasn't stopped the piracy of the game. And of course the pirated copies have SecuROM removed and play fine without an internet connection. It's this sort of thing that puts us honest people in a bit of a predicament. If I shell out my hard-earned money for the game, I have to play a crippled version of it and feel like a chump. Or I could download the game from a file-sharing network and play it where I want, when I want (for free), but live with that nagging feeling inside telling me that I shouldn't be playing.

So what is an honest person to do? I could buy the game and just use a cracked copy to play it. But that is just as maddening to me: if I've shelled out my money, I shouldn't have to waste my bandwidth on getting the game a second time. The other choice (the one I opted for in the case of Assassin's Creed) is to just not buy the game and forget about playing it.

I can't help but wonder how long these DRM schemes can possibly continue to live as a successful business model. In my example above, the DRM hasn't stopped the theft of the game, and it has cost Ubisoft a paying customer--the exact opposite of the model's intent.

On the upside, there are companies cropping up that will not use any DRM in their software and that make really good games. Ironclad is one, whose Sins of a Solar Empire won awards. Brad Wardell, CEO of Stardock (publisher of SoaSE), has spoken at length about the fallacy of DRM schemes: how they really only hurt the paying customer and do nothing to actually prevent piracy.

I actually bought SoaSE purely on those merits and ended up with one of the best RTS games I've ever had the pleasure of playing!

Monday, September 6, 2010

GNU/Linux: The Open Source vs. Commercial App War

I'm just going to come right out and say it: Linux needs more commercial apps, and from my personal point of view, Linux needs commercial games. In the Open-Source community this is sometimes an unpopular point of view. And I get it. I really do.

The strong point of the GNU/Linux operating system is that it is open source, and the thousands of apps that are immediately available for it are also (mostly) open source.

The benefits of open-source software are many. If a project is abandoned by its original creators, other users of the software can continue to work on and improve it; the project never has to die. This is just one example of many. So when a company thinks about selling closed-source software for Linux, some users get upset:

"Well, that's just fine and dandy. We don't need anymore proprietary software infesting our Free Software operating systems. Until Steam is liberated I strongly oppose its port to GNU+Linux."
--one comment on the recent news that Valve was not planning to release a Linux client for its Steam digital distribution system
http://www.linux-magazine.com/Online/News/No-Steam-For-Linux

To me this is short-sighted. For one, there will always be the choice not to install any particular piece of software. There is no reason to force others into your choice as well. This reminds me of the religious right trying to force TV shows to adhere to language and video standards to "protect the children," when obviously we are not all children, nor do we all have or want children. The obvious answer to that complaint is to just not watch the show and not allow your kids to watch it either.

If I want to install a closed-source commercial app on my system that is my choice. You don't have to if you do not wish to do so. And if it really bothers you that much, you can always switch to FreeBSD. I don't think that there will be any commercial software for that OS any time soon.

The fact is that there are already a number of proprietary commercial apps for Linux. One example I would like to use here is the game Osmos from Hemisphere Games. The day the Linux version was released was also the day that Osmos had its highest sales figures.
http://www.hemispheregames.com/2010/06/23/linux-the-numbers/

The Osmos numbers are telling. There are many in the Linux community who are willing to spend money on software--enough, even, to turn a profit for those selling it. So while there isn't a huge selection of commercial software out there for our chosen OS, I believe more will come, and I think commercial growth in Linux applications will be exponential. The more high-profile commercial game vendors release Linux clients, the more people will actually give Linux a fair shake and try it, from a live CD or at least in a virtual machine. Let's face it, the Linux OS as a desktop environment is really for the home user. The price is right, even if the learning curve is slightly steeper than with other commercial operating systems--though with the recent Ubuntu releases and new distros like Linux Mint, that is changing too.

All of these things are slowly but steadily garnering the attention of more commercial vendors, and as more software becomes available, the more attention the average person gives Linux. The two feed off each other. Soon enough, commercial software will become available for Linux that will make this entire debate moot anyway.

My History With Linux

Well, it's been a month since my personal Linux experiment began, and it does continue. We'll say that this is actually many iterations in, as I have had several "personal Linux experiments". This one is lasting longer than the rest.

Perhaps I should clarify, as I have had Linux running here in different capacities for a number of years. So to rephrase: this has been my first successful Linux desktop experiment. I have a Linux machine here running several servers--Samba for file-sharing, FTP for accessing files remotely (music mainly), and for a long while there was a SHOUTcast server running on it as well. In setting up these servers I became very comfortable with Linux and with editing configuration files. But the big difference between Linux as a server environment and Linux as a desktop environment is this:

I can set up a server to run properly and then leave it alone. A desktop computer is meant to be used and interacted with on a daily basis. If I'm constantly using the command line to get things done, I become very dissatisfied with the experience. A desktop environment should be quick and intuitive. In fact the more transparent it is to the user, the better.

In a server setting, I would actually prefer editing a bunch of configuration files to navigating my way around a bunch of windows and menus. I'll add users from a command line interface and not care, because I only have to do it once. But I'll get fed up pretty quickly in a desktop environment if I have to run apt-get in a terminal every time I want to install an application.
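To give a sense of how little that one-time server work really is: a file share like the Samba setup mentioned above usually comes down to a handful of lines in smb.conf. (The share name, path, and user below are hypothetical, just for illustration.)

```ini
[music]
   path = /srv/music
   comment = Music collection shared to the home network
   browseable = yes
   read only = yes
   guest ok = no
   valid users = dave
```

Write it once, restart the Samba service, and then leave it alone for years. That is exactly the kind of configuration I have no problem with.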

So my previous Linux desktop experiments failed because of this--not because they were too difficult, but because they were a pain in the ass.

As indicated above, I started another iteration of my Linux desktop experiment, and this time I chose Linux Mint 9. In that time I have had to edit a couple of configuration files, but each was one of those one-time things that could then be left alone, so I didn't mind. I haven't really had to use the command line at all. The configuration options within the GUI are very intuitive. There are also YouTube tutorials now, which is something I didn't really have before.

I then installed the KDE desktop environment and I haven't looked back, especially now with the release of KDE 4.5.1. KDE in its current form is the best desktop environment I've ever had the pleasure of using. I like Gnome too, which is very streamlined and perhaps still more intuitive than KDE. But KDE is like interacting with a work of art. It's just gorgeous! It makes me want to use it.

That being said, I'm still dual-booting between Mint 9 and Windows 7. There are still apps (games, actually) that I have to run Windows to use. I really hope that commercial game vendors will be more willing to release their products for the Linux platform in the near future.

There are many in the Linux/open source community who do not want this. That will be the topic of my next post.