Thursday, September 27, 2012

SSD

This is another one of those posts where I marvel at the amazing pace of hardware technology. I just purchased a 120GB OCZ Vertex 3 solid-state drive. This thing has a throughput of 0.5 GB/s. That's right. It can dump the entire contents of a dual-layer DVD in roughly 17 seconds (a dual-layer DVD holds about 8.5 GB, and 8.5 GB at 0.5 GB/s works out to about 17 seconds). To put that further into perspective, this drive is roughly as fast, in terms of throughput alone, as four or five ordinary drives operating in RAID 0. And because the drive is a departure from the mechanical drives which have been the standard for decades, it has no moving parts and therefore no seek time. There is no physical distance between any two addresses on the drive, and all accesses have the same latency, which, between the SATA controller and the SandForce chip, is close to zero for most purposes anyway.

The only drawback with SSDs is that each page can only be written around 3000 times over the lifetime of the device. Then again, mechanical drives are also limited by the physical endurance of their spindle and head bearings. Just the same, though, I want to minimize writes to my SSD, so here is what I did. I split my root filesystem between my SSD and my old 0.5TB mechanical drive. I divided things up thusly:

SSD

  • /etc
  • /usr
  • /lib*
  • /bin
  • /sbin
  • /opt
  • /boot


HDD

  • /home
  • /root
  • /var

And then I mounted /tmp on tmpfs, which is a type of ramdisk. Here, the SSD basically contains anything and everything that is expected to change less often than on a daily basis, which is, effectively, the entirety of the system minus /var and the home directories. So, at this point, I'm imagining the sheer loads of bandwidth which will be streaming into memory from the SSD, unfettered by the limitations of a clunky disk arm, when it occurs to me that I haven't yet calculated just how much data is on my SSD. So I do a du -xshc / and I get 3.9GB. What? That's right, I'm running Linux, so despite having thousands of programs installed, I could theoretically cram absolutely all of them, and all of their libraries, plus the kernel and all of the system programs, plus most of my system settings into system RAM simultaneously and still have two gigs free, which is incidentally what I feel like I have left over when I'm running a completely idle Vista system.
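
For reference, here is roughly what the /etc/fstab entries for a layout like this look like. This is a sketch only: the device names, partition numbers, and mount options are assumptions rather than a copy of my actual fstab, and /root would get a line of its own, same as /home.

# sketch of /etc/fstab for an SSD root plus an HDD for the frequently-written stuff
/dev/sda1   /      ext4    noatime,errors=remount-ro  0  1   # SSD: /etc, /usr, /bin, and friends
/dev/sdb1   /home  ext4    defaults                   0  2   # HDD
/dev/sdb2   /var   ext4    defaults                   0  2   # HDD
tmpfs       /tmp   tmpfs   defaults,size=2G           0  0   # /tmp lives in RAM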

Of course, in real life, the system only loads what it needs, which is why my system memory is nearly empty all of the time. And so ends my one hundred and thirty fifth thesis on why Vista is a useless bloated carcass of an operating system.

Here, I'm basically reiterating a theme that comes up frequently in my general ponderings about pretty much everything. If you only ever see things one way, you begin to take them for granted, and pretty soon what you have becomes invisible. Then, one day, you find something new for comparison and you wonder what in the world you were thinking all along. In this case, I've been wondering where all of the hardware innovations go. They go to waste on operating systems whose primary domain of innovation is finding new ways to squander resources. You see, OS developers have timelines and budgets, and we wouldn't want to burden them with the trouble of applying common sense, because that takes effort. What we do, instead, is drop a fat, sloppily under-engineered, over-managed project in the user's lap and let his hardware dollar compensate for a profoundly shoddy product. As long as users never test an alternative, they never notice that the quality of their OS is in freefall, because the continuous progress of hardware technology cancels it out.

Wednesday, September 26, 2012

iPhone

Just saw a video where an iPhone was glued to the sidewalk and they filmed random people attempting to pick it up. At least they found a use for it.

Tuesday, September 18, 2012

Desktops

I now count myself among the ranks of GNOME refugees. I liked GNOME 3 until I realized that Nautilus will report spurious, non-specific "file not found" errors when network-copying large file trees. Since I'm using Linux to avoid exactly this sort of thing, and GNOME is basically built around its file manager, I decided to ditch it for Xfce, which happens to be slated as the default desktop for Debian's next release.

I like it. It's simple, it's clean, and the controls are attractive enough, although the default icon set could stand some improvement. Best of all, I don't know how else you can get a windowed environment on a modern computer that idles at under 500MB of RAM, which is roughly half of what GNOME requires.

The main trouble, under Debian/squeeze, is that because it isn't the default desktop yet, there is no metapackage, and you have to go hunt down the various apps and features which would otherwise have been installed and preconfigured as a package-manager task alongside GNOME.

Thursday, September 13, 2012

Please wait...

I just wanted the world to know that I'm currently attempting a Wine install of Battlefield 3 while running Windows Update in a VirtualBox VM, all under a custom kernel in Debian, with my home-brewed Radeon driver, and this is the sort of thing that I find terribly exciting.

On one hand, I expect the Wine install to fail, simply because reimplementation of the Windows API is such a huge and literally interminable task. On the other, I expect VirtualBox to run Windows apps perfectly, but with a significant performance overhead.

At any rate, I have both Windows environments neatly sandboxed so that if either one gets a virus or simply implodes, as Windows is wont to do, it doesn't affect anything. I just rm -r and cp -r, and everything is fine again, which I realize is a radical concept to those of us who are accustomed to dealing with a system Registry. Wine, I compiled and installed locally under a dedicated user account whose sole purpose is to run Wine. This way, if the compatibility layer fails, or an app flips out and decides to delete everything, the most I can lose is the wine user's home directory. VirtualBox, on the other hand, is inherently a sandbox, so there's nothing special to do.
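
For the curious, the Wine setup amounts to something like the following. This is a rough sketch, assuming a dedicated account named "winer" and an already-unpacked Wine source tree; the names, paths, and version are illustrative, not my exact commands.

# create a throwaway account; its home directory is the blast radius for anything Wine runs
sudo adduser --disabled-password --gecos "" winer

# as that user, build and install Wine into the home directory instead of system-wide
su - winer
cd wine-1.5.13                     # illustrative version; any unpacked Wine source tree
./configure --prefix=$HOME/wine
make
make install

# if an app trashes the sandbox, wipe it and restore from a known-good copy
rm -r ~/.wine
cp -r ~/wine-backup/.wine ~        # hypothetical backup taken after a clean install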

As part of my Windows emulation binge, I took a look at ReactOS, and I frankly can't comprehend the purpose of that project. To me, the entire point of reimplementing Windows is so that you can use Windows apps without Windows. I suppose the point of ReactOS is that you can pretend you're running Windows without having paid for it, and that seems completely pointless to me. When I want to run Windows, I simply shell out the $100 and away I go. If I'm going to expend a ton of effort to run an operating system, I'm going to do it to run one which is better, not one which is a clone of something that doesn't work well for me in the first place. And realistically, nobody will ever catch up to the Windows API, because replicating all of the bugs in semi-documented APIs is really difficult, and the API is a moving target anyway -- not for the benefit of the user, but as a consequence of what I suppose could be called marketing-driven engineering. Which is a phrase I use to mean "We change stuff so that we can increment the version number and redesign the retail packaging."

Oracle

is seriously beginning to grow on me, though maybe it's because some of their best stuff came from Sun.

Wednesday, September 12, 2012

Eclipse

is great, in case I didn't belabor the point sufficiently. I'm surprised at how fast it is. Under Linux, you can scarcely tell that it's a gigantic Java app. It's no slower than the commercial products I've seen, and when it does bog down, it's usually because it's doing something useful, like indexing the daylights out of the entire Linux source, or whatever else you're working on. It makes extensive use of multithreading in the GUI, so even when you have a build running while indexing source in the background, you can continue editing as if the system were merely idling along. It is an absolutely brilliant product. It compresses something like half a dozen virtual consoles' worth of tools, browsers, status displays, and error reports into one interface. I am absolutely tired of products that make me ask, "Did the developers actually bother to try this before releasing it?" Eclipse is not one of those products. It's very obvious that the developers use Eclipse to develop Eclipse, because it's so useful and well thought out. The only other product I've found with Eclipse's apparent mind-reading capabilities is Google.

The main advice I can offer is to use Oracle's Java runtime, which gives fast and stable performance: unpack it into ./eclipse/jre, restart Eclipse, and from there it just works. Also, avoid shoddy or outdated plugins.
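
In concrete terms, that looks something like this. It's a sketch, assuming Eclipse is unpacked in ~/eclipse, and the JRE tarball name is illustrative; the one real detail is that the launcher checks for a jre directory alongside itself.

cd ~/eclipse                                    # the directory containing the eclipse launcher
tar xzf ~/Downloads/jre-7u7-linux-x64.tar.gz    # Oracle JRE tarball; filename illustrative
mv jre1.7.0_07 jre                              # the launcher looks for ./jre next to itself
./eclipse &                                     # restart Eclipse with the bundled runtime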

Tuesday, September 11, 2012

Video packages

I was surprised at how much work went into building my own debs for my video drivers. It isn't any problem with the Debian packaging system, which has been automated even further since I last looked at it. It's just that when I download something which is labeled a driver package, and I run the provided installer, I expect a finished product unless otherwise stated. This was sort of like ordering furniture, delivery included, and finding a big box of parts on my doorstep with no manual.

Anyway, at least the pieces seem to have been well made and everything works so far. AMD's Stream SDK installed without issues and clinfo shows all processors. Google Earth shows some instability, but I'm not sure what's causing it. Hopefully an update will fix it later.
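
For anyone doing the same sanity check, something along these lines works; the exact field names vary between clinfo versions, so the grep pattern below is only illustrative.

# dump the OpenCL platform and device list; the Radeon and the CPU should both show up
clinfo | grep -iE "platform|device|name"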

Everything works. I have OpenGL, OpenCL, audio, and network printing, just like with any commercial OS, except that there is no junk in it. Everything runs at a crazily fast pace, and the only way that I have ever run short of RAM was by running lzma with completely unreasonable dictionary settings.

Wednesday, September 5, 2012

Radeon HD 2000-4000 on Debian, Linux 3.5.3

As a developer, I really like Debian for a number of reasons: its emphasis on reliability, customizability, and efficient minimalism. The drawback, though, is that the fastidiousness involved in all of the testing and validation means that the current "stable" release is generally well behind most other distributions. So, if you want the latest and greatest of something, you often have to either backport a "testing" package or simply build it yourself. Fortunately, for the kernel, which is in a constant state of flux, there is the "kernel-package" package, which makes the proper installation of custom kernels trivial (I believe there is also an Ubuntu kernel-package).
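
For anyone who hasn't used it, the kernel-package workflow boils down to something like this. A minimal sketch, assuming an already-configured kernel source tree; the revision string and the resulting package names are illustrative.

# from the top of the kernel source tree, with .config already prepared
make-kpkg clean
fakeroot make-kpkg --initrd --revision=1.custom kernel_image kernel_headers

# install the resulting .debs like any other package
sudo dpkg -i ../linux-image-3.5.3_1.custom_amd64.deb ../linux-headers-3.5.3_1.custom_amd64.deb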

Ok. So, here is my take on the current state of Radeon HD support under Linux. Unfortunately, the open-source support native to the kernel is not so good, because ATI/AMD considers the methods by which it accomplishes most of its nifty stuff proprietary. This is probably because they are afraid that their source code would reveal details about their hardware components and ASICs and so forth. So, the bad news is that high-end 3D acceleration and OpenCL are pretty much out of the question -- at least within the realm of strict open source, that is.

The good news is that AMD has released its core IO library in closed-source binary form, which is the next best thing. This is wrapped in an otherwise open-source driver which exposes the various hooks the closed-source library needs in order to call the operating system and allocate DMA memory and other such low-level things. So, although the closed lib is a black box, there is still an open-source glue layer which we can update in order to maintain forward kernel compatibility even without ongoing manufacturer support, and this is always nice, because manufacturers can be really fickle about legacy support, especially towards Linux. Some people consider a mixed-source driver a half-measure. But I look at it this way: the vast majority of the time, the OEMs don't release their IC diagrams, nor even their firmware source code. So how is this much different? Anyway, it works! So, I'm happy.

When I downloaded the latest AMD driver package, I was disappointed to find that not only is the installer itself broken, but the latest stable kernel, 3.5.3, breaks compatibility with the driver. So, in the true spirit of Linux, I fixed it. You can download the makefile I used here. You will also need the Radeon HD 2000-4000 series x86_64 drivers from AMD. To use it, simply configure, build, and install your kernel as usual. Then place the two aforementioned files in a directory of their own and do

make build-pkg
sudo make install-pkg
sudo aticonfig --initial


and reboot. Detailed instructions are in the makefile, which you can open in gedit, and it will conveniently highlight the comments in blue. Read them and follow the instructions carefully lest you hose your system. So far so good, for me. fgl_glxgears runs at 3000 fps, and the rss-glx screensavers look great.
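
If you want to confirm that the driver actually took before relying on it, a couple of quick checks help (these are the stock fglrx and mesa-utils tools, nothing from my makefile):

fglrxinfo                               # should name AMD/ATI as the OpenGL vendor and renderer
glxinfo | grep -i "direct rendering"    # should say "Yes"
fgl_glxgears                            # gears on a cube; the frame rate prints to the terminal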

[UPDATE]
The new version of the Radeon HD 2000-4000 install wrangler for Debian is up, and it fixes the PM issue, as well as a problem with incomplete uninstallation. That solves all issues that I know of.

[UPDATE] Now builds native .deb packages.

Saturday, September 1, 2012

Makefile parallelism

For all the improvements that could be made to error-checking, makefiles are actually a pretty great invention. A makefile build rule is of the form:


A: B
   F

Where A is a set of targets, B is a set of dependencies, and F is a list of shell command templates to run top-to-bottom in order to obtain A from B. That's it. All you have to do is type up a list of true statements about the relationships between the steps in your build process, and make automatically figures out what needs to be done, and in what order. Not only that, but on request, it will figure out how to divide the steps up into groups which can be run in parallel so that if you ask it to build Z, and it knows that J, K, and L are not interdependent, then it will build those three all at the same time. But if it knows that J,K,and L depend on A, then it will complete A first. How useful is that? Very. You only run into problems when you think that you have specified a rule adequately, but due to some technicality of syntax, something didn't match up, and you get a different result from what you intended.
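
As a toy illustration of that scheduling (the names here are made up), take the makefile below: if you say "make -j4 Z", make builds A first, because J, K, and L all depend on it, then runs the three independent J, K, and L recipes concurrently, and only then assembles Z.

Z: J K L
   cat J K L > Z

J K L: A
   cp A $@       # three independent targets; under -j these run in parallel

A:
   echo base > A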

I was just reading an article by a guy who says that he launched a company to deal with the "problems" in parallel make. If you don't like the syntax, that's one thing, but the principle of the thing is not broken. If make is building things in the wrong order, it's because you omitted or misstated a dependency. Going back to my archive-unpacking example, I later realized that I was still missing something. Say I have:



mytarget: oem/a
   dostuff etc, etc.

oem: myarchive.zip
   unzip $< -d oem

oem/%: oem
   @echo


If you say "make mytarget", make's logic goes like this: ok, I need oem/a. oem/a does not exist. Do I have a rule that matches "oem/a"? Yes. oem/% matches, and it says that I need to make "oem" first. There's an explicit rule for making oem, and it says to unzip it.

That's excellent. However, there's a problem with it. From make's perspective, "oem" is already up to date if a) it exists on the filesystem and b) its modification time is later than that of all of its dependencies. That doesn't account for the case in which oem is half-done, like if the user hits Control-C during decompression and then runs the build again later. So, I will do this:



mytarget: oem/a
   dostuff etc, etc.

oem unpacked_flag: myarchive.zip
   unzip $< -d oem
   touch unpacked_flag

oem/%: unpacked_flag
   @echo

Much better. Now, I have defined success in terms of the existence of a file named "unpacked_flag", which can only exist after archive extraction has run to completion. So, when make goes and spawns seven subprocesses to pursue various branches of a build, and it sees that they all depend on oem/%, which is short for "anything in the oem directory", it will block all of those processes until the empty file "unpacked_flag" has been built, signifying successful extraction of oem/.

Great. Simply by having been both complete and accurate in our statements about our build dependencies, we automatically have a thread-safe build script. More specifically, a correctly written makefile and a thread-safe makefile are the same thing. Which makes me think: essentially, the aforementioned Makefile Guy started a company to service the sector of the programming community which is plagued by broken and badly written makefiles, though he won't say as much explicitly lest he offend his market. Talk about depressing. There was one more thing that got me, though. I had:



oem/patchme:
   echo 'append this' >> $@


This says to build "patchme" by appending 'append this' to it. Obviously, patchme has to already exist for this to make sense, but make cannot handle circular dependencies, so we can't specify the file as its own dependency. And since it has no other dependencies, we have effectively stated that it has none at all, which leaves make free to run this recipe whenever it likes, including before the archive that provides patchme has even been extracted. So, we have to explicitly say:



oem/patchme: unpacked_flag
   echo 'append this' >> $@


That's it. It's elegant and it makes perfect sense.