Rasteroids
Thursday, November 29, 2012
Moustache transplants
Saturday, November 17, 2012
Too Much Computing?
There's something that's been bothering me for a while. Computerized voting. Why? Because nobody knows what's inside a computer. Literally. Microchips are tiny, and they consist of billions of minuscule structures photographically etched onto a silicon slab the size of a coin which, further, is encased in ceramic or polymer resin.
So, unless you have shaved open every chip in the voting machine and then analyzed it under an exotic microscope layer by layer, you don't know what was inside the machine you used to vote. You don't know what was in it before you voted, nor, more importantly, what it recorded while you voted.
The trouble is that there are many avenues for the introduction of flaws at various points in the production and supply chain for the machines and their software. Add to this the fact that we vote by secret ballot in the US, and there's really no practical way to verify the validity of anything that comes out of an electronic voting machine, because all it contains is a pattern of stored charges that is meaningful only to the machine itself and to its manufacturer. It can claim anything it was programmed or wired to say, whether by mistake or by malicious contrivance. A paper ballot, on the other hand, carries minute details which, though they might not tie it to a specific individual, do distinguish the ballots from each other. Handwriting, for example. So I don't see what's wrong with using old-fashioned pen and paper. Actually, I think voting is a rare instance where pen and paper are vastly superior to any mechanical or electronic method.
Tuesday, November 13, 2012
More Video
I think it's kind of strange and interesting how the GPU industry is turning PC performance metrics on their head. My new nvidia GTX 650 Ti will bring my computer's processing capacity to somewhere around 2 teraflops, with 1.8 of those residing on the GPU alone. No computer of any kind reached even half of that capacity until 1996. I find that I never get tired of reveling in the idea that my ordinary middle-of-the-road gaming PC is computationally equivalent to a machine for which an entire research department or university would have paid millions of dollars as recently as within my lifetime.
I'm also struck, however, by the surprising asymmetry. There are 768 parallel execution units on this GPU. That is, it has hardware support to perform seven hundred sixty-eight simultaneous operations at any given time. It's like a supercomputing cluster on a PCIe card. In some ways, I wonder if it isn't the PC which is becoming a peripheral to the video card. I think there will come a time when the PC is relegated to little more than the role of an I/O backplane, while all of the interesting things happen elsewhere.
To answer why this is, I think we have to consider the burden imposed on the x86 architecture by the desire for backward compatibility. As far as I know, it wasn't until the adoption of the amd64 architecture that the PC began to discard legacy features, and even then, only in 64-bit "long mode". This means that, in addition to all of the advanced hyper-modern features available on an amd64, there is still all of the supporting circuitry needed to run applications from 1978. Try booting to DOS sometime. It works perfectly. I suspect that CP/M would be no different.
I don't understand why this is. Can't Intel and AMD sell special legacy kits for people who want to run ancient software? Why not just emulate old computers entirely in software? There is absolutely no shortage of processing power to do that, plus there are substantial benefits to virtualization, like snapshotting, as well as the extension of the modern OS's capabilities into the emulated environment.
I think that, at some point, Intel and AMD should simply start over from scratch and design a chip around a modern instruction set. With the recent progress on virtualization technology, I tend to hope that this is the direction in which they are headed already. Even as things are, if I needed to run some sort of x86-based PLC written in QBASIC, I would have no problem emulating several dozen such machines entirely in software. So what is the point of hamstringing the transistor space with three decades' worth of backward compatibility? Because a PC CPU has not some, but ALL of the features that were ever implemented over the course of my entire lifetime: 8086, 286, 386, 486, and on, and on, and on. All of those features, many of which are non-features to me as well as to most users, take up space and waste energy.
The GPU, on the other hand, is not burdened by any such issues, and can be designed entirely for raw performance. So, I think that if the results which GPU designers are achieving are any indication, then it means that we're designing CPUs wrong.
Monday, November 12, 2012
Video
My previous video card was an nvidia 8800 GTS from the now-defunct BFG Technologies. Before they went out of business, they sold a lot of nvidia GPUs with extended warranties standard. A great many of these cards burnt out, as attested by the widespread complaints which can be found on the Web. My card was one of these.
This is the only bad thing that I can say about nvidia and, in reality, it's hard to say whether the fault can be attributed to nvidia at all since, at least in my case, it was a factory-overclocked card. This was the reason for the extended warranty. The reasoning was supposed to go "Sure, the chip is overclocked, but it comes that way from the factory with an extensive custom heatsink, and the engineering and workmanship are guaranteed". Which is a fine theory until you realize that once things begin to go south, they can simply close up shop and run, which is what they did.
So, anyway, aside from this one thing, which may not be an nvidia problem at all, I can only say good things about the company. In particular, their Linux support is first-rate. It's really good to see a prolific, leading hardware manufacturer to whom Linux is not some sort of stepchild.
I get the feeling that a lot of companies dedicate a single evening meeting to Linux where they go "Ok, so $x thousand dollars should be a suitable Linux budget for the period ending FOREVER, and if that should chance to result in a driver which works at least some of the time, for some applications, then great. And I ordered mocha, not decaf." And again, I realize that Linux isn't important to everyone, but I really, really like it, and I think it would greatly further the computing industry both in terms of culture and productivity for Linux to be brought further into the mainstream.
Sunday, November 11, 2012
VirtualBox
I really like VirtualBox so far, but there are a few caveats. First of all, I find that I have to use PIIX chipset emulation, because ICH emulation (which is marked experimental) dies frequently and nastily running Vista. So far, this problem appears completely resolved running an emulated PIIX board.
The other thing is that installing the VirtualBox extensions on Vista is a pain because Vista locks d3d9.dll, so the VirtualBox installer can't replace it with a paravirtualized version. The solution is inconvenient. First off, you have to mount the VDI file containing the Vista install and rename d3d9.dll, so that Vista will stop locking it immediately upon boot. However, contrary to the instructions I've found scattered online, I've been completely unable to mount VDI files directly. Instead, I use:
ionice -c 3 VBoxManage clonehd --format RAW in.vdi out.img
and this converts the Virtual Disk Image file to a raw disk image which, as far as I know, is just the concatenation of all of the data on the disk as it would appear on a physical drive. "ionice -c 3" is optional but, on my system, it prevents the conversion from hogging all of the system's I/O time, so that I can continue doing amazingly important things in the foreground, such as this blog entry.
Great. So, now we have a disk image, but we can't loop-mount an entire partitioned disk directly. So we start parted on the image and, at its prompt, set the display unit to bytes (u b) and print the partition table (p):
parted out.img
u b
p
and that gives something like:
Model: (file)
Disk /home2/wine/out.img: 69632786432B
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start     End           Size          Type     File system  Flags
 1      1048576B  69631737855B  69630689280B  primary  ntfs         boot

where the value in the "Start" column is the byte offset into the image at which the first and only partition begins. The intervening data is probably stuff like the partition table and MBR.
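As a sanity check on that number: the offset is just the partition's start sector times the sector size. Modern partitioning tools usually start the first partition at sector 2048 (an assumption here, not something parted printed above), which with 512-byte sectors works out to exactly the value reported:

```shell
# 2048 is the conventional first-partition start sector on modern
# tools; 512 bytes is the sector size parted reported above.
start_sector=2048
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"   # 1048576
```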
FINALLY. We can do:
mount -o loop,offset=1048576 out.img /mnt/vbox
mv /mnt/vbox/windows/System32/d3d9.dll /mnt/vbox/windows/System32/d3d9.dll.bak
umount /mnt/vbox
At this point, we discover that VirtualBox cannot directly load raw disk images, so we do:
ionice -c 3 VBoxManage convertfromraw out.img nod3d.vdi --format vdi
And now, finally, we have a vdi file which is exactly like the one we started with, except for the name of a single file, and now we can boot to Vista and install VirtualBox Guest Extensions, complete with the experimental D3D driver, which I hope will work instead of bluescreening every few minutes.
Incidentally, I've somewhat accidentally hit upon a neat strategy for speeding up my virtualized Vista install. I've found that by placing my base disk image on my SSD, I can effectively separate my base Vista install from all subsequent snapshots, whose location defaults to my home directory, which is on an old platter drive. The base install is 21(!!!) GB excluding the pagefile, and this is mostly libraries and executables that never change, so the SSD is a great place for this data.

But because I always run this disk image from a differential snapshot located on a platter drive, the base image is effectively read-only, so I don't have to worry about Windows thrashing my SSD to death with constant write operations. I also don't have to worry about shoving a non-standard configuration into Windows, which is apt to break irreparably the moment you do anything out of the ordinary with it. It's all handled by Linux and the VB hypervisor. As far as Windows is concerned, it's just writing to a single regular hard drive when, in fact, all of the system files are read at high speed from an SSD, while all of the pagefile, document, and application I/O is quietly routed to a hardy old mechanical drive. And best of all, I can run Windows without listening to the tortured disk head grinding I've come to associate with it.
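For the record, the relevant knobs can be set from the command line. A sketch, with "Vista" and both paths standing in for whatever your VM and drives are actually called:

```shell
# Put the differencing images (snapshots) on the platter drive
# ("Vista" and the path are placeholders):
VBoxManage modifyvm "Vista" --snapshotfolder /home2/vbox/snapshots

# The base VDI stays on the SSD; taking a snapshot makes it
# effectively read-only, with all subsequent writes going to a
# differencing image in the snapshot folder.
VBoxManage snapshot "Vista" take "base-on-ssd"
```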
Monday, November 5, 2012
PunkBusting in WINE
The infamous PunkBuster anti-cheat system uses debugger-like techniques and is similar in function to a typical antivirus, except that it is dedicated to game cheats rather than the usual types of malware. It is a client-server arrangement. The PB server runs alongside the game server, and the client lives in the background on each player's machine. Mostly, the client sits and watches the game's memory space searching for various anomalies which it considers to be indicative of the presence of a cheat. If it finds one, it reports the player to the server, which then decides on a corrective action which can range anywhere from a warning, to ejection, to possible global blacklisting, depending on the nature of the fault.
You can disable PunkBuster on your client if you want to, but most servers will treat that as a minor infraction in itself, and will quickly kick you to ensure that you don't interfere with the other authenticated players.
PunkBuster installed under WINE without any complaints, though, strangely, I had to assign it a library-mapping exception directing WINE to use its built-in version of crypt32. It's odd that it required the WINE version of the crypto library to run, but it installed fine. The question of whether it works, on the other hand, is a different matter.
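For anyone trying to reproduce this, the override can be set in the Libraries tab of winecfg, or per-invocation through an environment variable; "setup.exe" here is just a stand-in for the actual installer name:

```shell
# Run the installer with WINE's built-in crypt32 forced on
# ("setup.exe" is a placeholder for the real installer):
WINEDLLOVERRIDES="crypt32=builtin" wine setup.exe
```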
I have identified three background processes which are key PB components. These are PnkBstrA.exe, PnkBstrB.exe, and PnkBstrK.sys. PnkBstrK.sys is a Windows driver, and it is the component which fails upon joining a server. When I think "Windows driver under Linux", right away I expect that this is going to be trouble. After some cursory digging around, I found, to my surprise, that the driver is successfully loaded by WINE and runs in the background as it should, until it is called upon to perform an unimplemented NT kernel call, at which point it dies.
The call in question is PsLookupProcessByProcessId(), which is easy enough to implement, but the real trouble is the subsequent call to KeStackAttachProcess(). The dearth of documentation for this call is not surprising, but from what I gather, its purpose is to attach the address space of a user process to the driver's context so that it can go digging around in it. Given the nature of PunkBuster as an application, it's fairly clear what the driver component is trying to accomplish before it errors out. It's saying "Excuse me, ntoskrnl.exe, I would like to scan process X's address space", and WINE says "I don't know how to do that". PunkBuster fails its scan, and I get kicked from the server.
Not being able to run PunkBuster is a fairly severe limitation on WINE given that, at least in my view, its main purpose is to enable Windows games. So, I think it should be a fun challenge to develop a patch that makes it work.
The first problem at hand is the issue of address space collisions. Under Windows, every user process's address space is partitioned into two areas. The low area belongs to the userspace process, and the high area is mapped into systemspace. One consequence of this arrangement is that a systemspace process can map an entire userspace image into its own address space, at its original position, without colliding with or overwriting any of its own data. This is what I assume KeStackAttachProcess() is supposed to do. And although it's easy enough to implement using shared memory, the first thing I need to do is modify WINE so that its "systemspace" loads into a reserved area, mirroring the Windows behavior.
Certain WINE components are always required for every process, driver or otherwise. These are "wine" and "wine-preloader", which I have successfully relocated. Now, I just need to modify the PE loader to recognize a "systemspace" load, and make it offset all of the dll images appropriately...
Adventures with WINE Part 2
Many modern games require a Web browser for setup, updates, and multiplayer. So, my first priority was to get a working Web browser. WINE comes with its own Internet Explorer replacement which is based on Gecko and nominally works, but it doesn't have all of the features of MSIE. Importantly, many apps embed Internet Explorer as a component for menu rendering and Internet I/O, so having a working form of IE is important.
Using WINE you can, in fact, install the real Internet Explorer 8 from Microsoft and run it under Linux. It doesn't work perfectly, but it works well enough to host the plugins you need to launch games. Windows Firefox, on the other hand, does seem to work perfectly. It works exactly as usual, even installing and running plugins from the Web. Still, it's important to get IE up and running because some applications might stubbornly embed IE in Firefox(!!) simply for the sake of browser-portability. So you STILL need a working IE, and installing MSIE 8 using WINE's "winetricks" script is probably the best way to do it.
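The winetricks route is a one-liner. The verb name, as I understand it, is simply "ie8"; run winetricks with "list-all" to confirm what your version supports:

```shell
# Download Microsoft's Internet Explorer 8 installer and install it
# into the current WINE prefix.
winetricks ie8
```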
So, now I have two working Windows-based Web browsers running under Linux, and this is pretty good. However, the next problem I notice is that SSL is broken in IE. SSL is critical for games because the portal utilities, or whatever they're called, need the cryptographic layer for licensing and authentication. You can get around this problem, again, thanks to winetricks, which automates a large number of WINE tasks, many of which revolve around automatically downloading and installing "native" Windows components.
I found that switching to native versions of three Internet-related libraries got SSL working in IE: wintrust, secur32, and crypt32. The native Windows versions are a complete implementation, in contrast to the WINE versions, and so they work better. They aren't included by default because they are owned by Microsoft, but nothing prevents the user from obtaining them from Microsoft.
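winetricks can fetch and register those in one go. I'm assuming here that the verb names match the library names, which is the usual winetricks convention; check the "list-all" output if in doubt:

```shell
# Install native wintrust, secur32, and crypt32 into the current
# WINE prefix, switching their DLL overrides to the native versions.
winetricks wintrust secur32 crypt32
```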
I want to start out with something basic, so I decide to try out Battlefield Heroes. I use Firefox to navigate to the game's Website and it happily downloads and installs all of the necessary plugins and applets. The game downloads and updates itself. This is getting exciting. It launches, it runs, and the framerate is good despite a few console warnings about unrecognized shader definitions. So now, it's time for the moment of truth: multiplayer. And guess what else? It works. For all of about three seconds, at which point the server ejects me for a PunkBuster failure. And so begins the next chapter.