Friday, February 29, 2008

Blog deprecated

This blog is now deprecated; all the ideas currently posted have been uploaded to:
http://brainstorm.ubuntu.com


I will be junking this blog pretty soon.

Wednesday, January 16, 2008

Ethernet/Firewire Target Disk Mode.


Apple has designed Macintosh hardware so that it is trivial to access the data on another Mac as if it were an external hard disk.

Apple's method of doing it is:
1) Turn on the computer you want to treat like an external hard disk and hold down T on the keyboard.
2) Plug a normal 6-pin FireWire cable into both Macs.
3) All mounted devices (USB memory sticks, inserted CDs and HDDs) will appear on the other computer, as if the computer were an external drive.
4) You may boot off the external drives as if they were internal, and use them as normal.

Some potential uses of this technology are:
1) Easier servicing. If the software is dodgy, you can easily determine this because the computer's data will still appear on the other computer. It also means you can copy data off even if the machine won't boot.
2) Booting off a remote computer. If the video card is faulty, you can simply put that machine into target disk mode, start up the other computer from its disk, and it will be as if you were sitting at the faulty machine.
3) Easy migration of data. Many manufacturers, such as IBM or Dell, will not allow you to open a computer within the warranty period, so it is not easy to migrate data over; you cannot just take the hard disk out and put it in the new computer. With this, since the computer can be made to appear as an external hard disk, it is straightforward to write a tool that migrates the data across.

One problem, however, is that Apple implements this via EFI or Open Firmware. While many PCs are now getting EFI support, it's not that common.

One method we could use to deal with the lack of EFI is to include a modified kernel which, instead of mounting the filesystems, acts as a block device server over FireWire or ATA over Ethernet, so that the hard disks appear on the other computer. Loading the kernel would be done the same way as it is currently, via GRUB (or we could even put the kernel and GRUB on a memory stick, which could be offered for creation during install).
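As a very rough userspace approximation of the idea (a sketch only, assuming the aoetools vblade utility is installed and run as root; the disk and interface names are placeholders), the whole disk could be exported over ATA over Ethernet like this:

# Minimal userspace approximation of "target disk mode": export a whole disk
# over ATA over Ethernet so another machine can mount it as a block device.
# Assumes the aoetools "vblade" utility is installed and we are running as root;
# /dev/sda and eth0 are placeholders for the real disk and network interface.
import subprocess

DISK = "/dev/sda"   # disk to expose (placeholder)
NIC = "eth0"        # interface the other computer is plugged into (placeholder)

# vblade <shelf> <slot> <netif> <device>: the other machine sees the disk as
# AoE target e0.0 once its aoe kernel module is loaded.
subprocess.run(["vblade", "0", "0", NIC, DISK], check=True)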

To make the process efficient, GRUB should employ a hotkey similar to Apple's T, because if the video card is broken on a laptop and you simply want to copy the data off, you won't be able to see the boot menu. In fact, we could beat Apple's implementation by beeping when it succeeds, so users don't stand around for five minutes wondering, and we could also offer logging via the serial port.

This would benefit not only Linux but any computer, and I can't imagine it would be difficult to code either, especially via USB stick (no harder than coding support for a FireWire device). In fact, if properly developed, it should even work on Macs. It would also allow motherboard manufacturers to include a flash drive on their boards with the code.

The advantage of using a kernel on a memory stick is that the internal hard disk never gets mounted: serving it out block by block is a completely safe operation, and it would even make it possible to format the hard disk from the other computer, since nothing on it would be in use.

This should be easy to do and would benefit everyone (it could even be used for data recovery), so let's get to work. Remember, many computers are under warranty, so users don't like opening them, and many users just want a simple way to copy all their data off reliably, without taking the computer apart or loading a live CD to check whether the data is still there.

Configure-less Boot Loader



There should be no reason that anyone needs to mess around in LILO or GRUB to boot any OS. Don't get me wrong, you should be able to if you want to, but the default should be for the boot loader to configure itself and be capable of identifying bootable devices to boot from. In the case of Linux, you should also be able to select from a list of automatically identified kernels (maybe maintain a /boot/kernels directory with symlinks to the active kernels, and design an easy way to set different parameters for various kernels).

On OSX, you simply hold down Option and all bootable devices are listed. No 10-second delay is needed so you can select an option, and the only option to set (if required) is the default bootable device. I should also point out that the Apple implementation lets you easily identify the operating systems thanks to the use of icons. Maybe we can learn something from the rEFIt project for Mac (http://refit.sourceforge.net/screen.html). Changing the default startup disk in OSX is performed entirely via the GUI too (if preferred). It should be this easy on Linux.


GRUB would need to be modified to look for specific files on different devices (or MBRs on different drives) and offer them in the list. It would make a big difference in usability.
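A rough sketch of the auto-detection side, assuming kernels live in /boot with the usual vmlinuz-<version> naming; the output is illustrative, not real GRUB configuration syntax:

# Rough sketch: auto-build a boot menu by scanning /boot for installed kernels.
# Assumes the usual vmlinuz-<version> / initrd.img-<version> naming.
import glob
import os

entries = []
for kernel in sorted(glob.glob("/boot/vmlinuz-*"), reverse=True):
    version = os.path.basename(kernel).replace("vmlinuz-", "")
    initrd = f"/boot/initrd.img-{version}"
    entries.append({
        "title": f"Linux {version}",
        "kernel": kernel,
        "initrd": initrd if os.path.exists(initrd) else None,
    })

for entry in entries:
    print(entry["title"], "->", entry["kernel"], entry["initrd"] or "(no initrd)")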

Networked screensavers (using Bonjour/mDNS)

(Doom screenshot from http://www.freakygaming.com/gallery/action_games/doom/berserk_brass_knuckles)

We should investigate new types of screensavers which are network based instead of the old traditional single-computer screensavers. Probably the oldest example in the book is a network-wide fish tank. Fish swim between computers, and the longer a computer remains idle, the larger its fish becomes. Fish evolve into swarms, and so on. If you want to go Spore-like, computers that idle longer could evolve into more refined communities which fight others. For the more G-rated crowd, why not turn it into a computer lab full of "Go Fish"-playing PCs? Or, even better, why not have a whole lab full of computers playing Doom, where the faster the CPU, the better the computer performs in game.

Even old screensavers composed of bouncing cows or flying toasters, while nice, could be nicer if they were distributed in some form. Not great for a lab doing CPU clustering, but at a LAN party you could still beat others while being away from your computer for a short time.
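A minimal sketch of the discovery layer, using the third-party Python zeroconf package; the "_fishtank._tcp" service type, port and address are invented for illustration:

# Sketch of the discovery layer only: each idle machine advertises itself over
# mDNS/Bonjour and watches for other participants. Uses the third-party
# "zeroconf" package; service type, port and address are made up.
import socket
import time
from zeroconf import Zeroconf, ServiceInfo, ServiceBrowser

SERVICE = "_fishtank._tcp.local."

class PeerListener:
    def add_service(self, zc, type_, name):
        print("screensaver peer joined:", name)

    def remove_service(self, zc, type_, name):
        print("screensaver peer left:", name)

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
info = ServiceInfo(
    SERVICE,
    f"{socket.gethostname()}.{SERVICE}",
    addresses=[socket.inet_aton("192.168.1.23")],  # replace with this machine's LAN address
    port=4242,
    properties={"idle_minutes": "0"},
)
zc.register_service(info)                      # advertise ourselves
ServiceBrowser(zc, SERVICE, PeerListener())    # watch for other machines

try:
    time.sleep(60)  # the real screensaver loop would run here
finally:
    zc.unregister_service(info)
    zc.close()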

Lame, I know, but still, it's something which nobody else has done for anything other than scientific processing like Folding@home. In a graphics lab it may be hard to convince people to burn CPU cycles on protein folding, but nobody minds watching computers take on a mind of their own.

I've never seen this on Windows or OSX, so we would be ahead on this one, and it would make people actually want to leave their computer idling. Special precautions would need to be taken, though, to ensure it is secure.

Touch screen ready

We don't really have anyone working on Linux touchscreen optimisation, but we should. I would like to see the KDE/Qt, GNOME/GTK and E/EFL teams start seriously taking multitouch into consideration and start making changes to make it easy: supporting gestures on multitouch devices and mice, trying to get basic TrackIR support for Linux, and maybe offering profiles which change the way the desktop is laid out to aid different types of users.

For instance, a basic difference between touch screen computing and conventional computing is that touch screen users prefer icons to lists of files in their file manager. With proper gesture support built into the libraries, developers can easily add support in their applications. If we don't get onto this now, we will be slow to catch up again. We have a good opportunity to pace forward and win here because Apple has been slow at utilising multitouch (dismal, in fact: the iPhone supports one gesture, zooming), and Microsoft Surface is too expensive for normal users. But rest assured, they WILL ensure that they are ready for it in the next Windows.

ZFS for Linux

We have to get the ZFS filesystem working properly in Linux. I won't go into details on the underlying concepts of how ZFS operates, as it has already been well documented in multiple locations, but further information is available at: http://www.sun.com/software/solaris/zfs_learning_center.jsp


OSX already supports reading ZFS, and it's only a matter of time until write support is added. The biggest benefit ZFS would bring is an easy-to-use, fast, extensible RAID filesystem for Linux.


Implementing it would finally bring RAID to mainstream Linux. Many Linux distributions still refuse to include RAID support during installation, and many that do don't support it properly. In fact, one distribution I tried errored out while it was initialising the RAID. Compare this experience to OSX, which has had easy-to-use RAID for ages. However, ZFS will succeed where OSX's RAID fails: ZFS supports almost infinite storage and is easy to use.


Also, Apple's implementation is seemingly buggy. The other day, for a demonstration, I pulled out a drive whilst it was being rebuilt, and Apple RAID would no longer accept it back into the array, thinking that it was a freshly formatted drive. Apple's RAID also does not appear to be flexible enough to add additional hard disks after initial setup.


A proper ZFS implementation would require that, after popping in a new, uninitialised hard disk, the user is prompted to ask whether they want to attach it to the existing ZFS pool, if one exists.
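A very rough sketch of that prompt, assuming the ZFS userland tools are present; the pool name and device path are placeholders, and a real implementation would be triggered by hotplug events rather than run by hand:

# Very rough sketch of the "new disk detected" prompt, assuming the ZFS
# userland tools are installed and a pool already exists. Pool name and device
# path are placeholders.
import subprocess

POOL = "tank"           # existing pool (placeholder)
NEW_DISK = "/dev/sdb"   # freshly inserted, uninitialised disk (placeholder)

answer = input(f"Uninitialised disk {NEW_DISK} found. Add it to pool '{POOL}'? [y/N] ")
if answer.lower().startswith("y"):
    # "zpool add" grows the pool with the new device as an additional vdev.
    subprocess.run(["zpool", "add", POOL, NEW_DISK], check=True)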

Password security tracking for secure textboxes all round

The secure password text fields in the various GUI toolkits should be upgraded so that they have a security rating next to them, or a way of accessing their security rating. "AAAA" is obviously a lame password, which would get a lame rating, while "h^4Fjh7fd:'8^gS``Z'" is obviously something which would rate highly. WEP/WPA passphrases would follow different criteria, as they need to be a lot longer to be secure.
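A minimal sketch of the kind of rating function a toolkit could put next to a password field, estimating entropy from length and character classes; the thresholds are arbitrary illustrations, not a vetted policy:

# Estimate password entropy from the character classes used and the length,
# then map it to a coarse rating a GUI could display.
import math
import string

def password_rating(password: str) -> str:
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    bits = len(password) * math.log2(pool) if pool else 0
    if bits < 40:
        return "weak"
    if bits < 70:
        return "fair"
    return "strong"

print(password_rating("AAAA"))                  # weak
print(password_rating("h^4Fjh7fd:'8^gS``Z'"))   # strong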


There should also be a way of easily and automatically generating secure passwords, as Apple does. Apple places a picture of a key next to password fields (in inconsistent locations) that, when clicked, brings up a password creation dialog. It allows you to create a password randomly depending on the preferred length, whether you want it to be memorable (e.g. "2frog67Beer,"), whether you want it to be alphanumeric, etc. New users still believe that a password like "chicken" is secure because of its length, when in reality it's an easy dictionary crack; let's make it obvious that it's not.
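And a sketch of the generator such a key button could call, using cryptographically secure randomness; a "memorable" mode would need a wordlist, which is omitted here:

# Generate a random password of a chosen length and character set.
import secrets
import string

def generate_password(length: int = 16, alphanumeric_only: bool = False) -> str:
    alphabet = string.ascii_letters + string.digits
    if not alphanumeric_only:
        alphabet += string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password(20))
print(generate_password(12, alphanumeric_only=True))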

Beagle (or other indexing tools) Plugin GUI creator, File data representation for Coding IDEs


When Tiger was released, Apple shipped a library known as Core Data to developers, which allowed them to represent data as entity-relationship and other UML-style diagrams for data management. It would be awesome for programmers if they could represent their files as a UML diagram and have the framework of their code automatically created for them. Apple built this into their Xcode utilities.


Even better, however, would be an extension of it. All software engineers have drawn dozens of diagrams of a file structure before. There are a few examples of data storage on Hans Reiser's Reiser4 webpage, http://www.namesys.com/v4/v4.html. What we could do is design a GUI for representing data in the standard development tools (or even a plugin for Eclipse).


The ability to easily represent data in coding utilities would act as a type of self-documentation, and lazy programmers could even use it to generate parsing code for their programs (e.g. create structs for headers and create functions which read the header in one block).
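A toy example of the sort of parsing code such a tool could generate: the header is described declaratively (as it would be drawn in the diagram) and read in one block. The field names and binary layout are invented for illustration:

# The "diagram" in data form: (field name, struct format) pairs.
import struct

HEADER_SPEC = [
    ("magic",   "4s"),
    ("version", "H"),
    ("flags",   "H"),
    ("length",  "I"),
]

HEADER_FMT = "<" + "".join(fmt for _, fmt in HEADER_SPEC)

def read_header(data: bytes) -> dict:
    # Read the whole header in one block and name each field.
    values = struct.unpack_from(HEADER_FMT, data)
    return {name: value for (name, _), value in zip(HEADER_SPEC, values)}

sample = struct.pack(HEADER_FMT, b"DEMO", 1, 0, 128)
print(read_header(sample))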

However, the real advantage comes when you look at indexing programs such as Beagle. Beagle's sole purpose is to parse files, so this could allow anyone, programmer or not, to grab the specs for a file type and draw them as a diagram. I have a very simple example above: in that case, you would tell the program to concatenate all the letters. Since the tool knows how the data flows, it should be easy to represent.


If we keep the raw diagram files, it also means it will be immediately obvious to other developers if there are any bugs in the code. It is a lot easier to spot bugs in a diagram than bugs in 1000 lines of parsing code. We could also use the diagrams for validating files, to check whether other programs meet the standards exactly. Finally, we could use a file representation tool as a security tool, and automatically generate files which step out of bounds in different locations, to test our programs.

I have never seen anyone do this yet. It would be a first, and it would be no harder to code than something like Glade or other GUI creation tools.

Integrated parental controls


Automatic recreation of config files

There should be no excuse for programs not working if I delete their config files, unless the config is globally relevant (as with Apache). If I delete xorg.conf, on the next reboot Xorg should still be able to start, and there should be a working copy of xorg.conf sitting there for me, with full commenting.

Linux distributions need to get out of the habit of creating configuration files for every program that runs as root. The programs requiring them should instead be capable of creating them. In some cases this isn't possible, because the process doesn't run as root and won't have permission to do so, or because there is no sensible default that can be generated (like Apache, which is per-site specific).

And the first one to claim this is a security risk needs to rethink things, because distributions will ship a default configuration anyway. If the default configuration could be potentially dangerous, easy: the next time an administrator logs into the computer, tell them that the configuration was recreated for the app, and that for safety reasons they need to either click [Re-enable service] or [Keep service disabled for now]. Keeping it disabled allows them to review the configuration before re-enabling it. Or create the file and give it a .sample extension. Don't be stupid, though, and just assume it should be there and bomb out.
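A minimal sketch of the "recreate my own config" pattern; the path and option names are placeholders for a real daemon's configuration:

# If the config file is missing, write a fully commented default and carry on
# instead of bombing out.
import os

CONFIG_PATH = "/etc/exampled/exampled.conf"   # placeholder daemon config
DEFAULT_CONFIG = """\
# exampled.conf -- regenerated automatically because no config was found.
# Review the defaults below, then re-enable the service if it was disabled.

# Port the daemon listens on.
port = 8080

# Log verbosity: quiet, normal or debug.
log_level = normal
"""

def load_config() -> str:
    if not os.path.exists(CONFIG_PATH):
        os.makedirs(os.path.dirname(CONFIG_PATH), exist_ok=True)
        with open(CONFIG_PATH, "w") as fh:
            fh.write(DEFAULT_CONFIG)
        # A real implementation would also flag the admin to review/re-enable.
    with open(CONFIG_PATH) as fh:
        return fh.read()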

File bundling




One highly elegant thing Apple does is create bundles. Bundles are simply self-contained folders which contain all the data relevant to that item, and are differentiated from normal folders via a folder extension (similar to a file extension, except on a folder).

So an example game on Linux is currently laid out as:

/opt/local/bin/portal
/opt/local/lib/portal.lib
/opt/local/etc/bleh/lib
/opt/local/share/portal/gfx/001.png
/opt/local/share/portal/gfx/002.png

etc.

So a user needs to navigate to either the application menu or /opt/local/bin/portal to start the game. Also, note that there is no elegant way to move the application onto another computer. The data is not self-contained, and you cannot simply copy /opt/local, because there could be thousands of applications in there, and there are probably a few other libs scattered through the directories.

Apple's solution is to rearrange everything into something similar to:
./portal.app/bin/portal
./portal.app/lib/portal.lib
./portal.app/etc/bleh/lib
./portal.app/gfx/001.png
./portal.app/gfx/002.png
./portal.app/contents.plist (contains what files the application opens, the type of bundle it is (which is obvious anyway), and the position of the application binary relative to the root of the bundle (so "./bin/portal" for instance))
./portal.app/sources.apt (holds the mirror the updates are stored on)

The game is now a portable directory and can be shifted around; if you want it in your home directory, you can just move it there. Executing the bundle works as follows:
1) The user selects the top level of the bundle and double-clicks; the directory has the icon of the file and/or type of file the bundle holds.
2) The loader reads contents.plist and identifies where the binary is located (at ./portal.app/bin/portal).
3) The loader executes the binary.
4) The program reads libraries from external library bundles (which cannot be moved around).
5) The program reads internal files using ./portal.app/*, NOT a hardcoded /Apps/portal.app/*.

The main difference is that the user can now just double-click the portal.app directory and the game is automatically opened, so there are fewer levels of directories they need to go through and mess around with.
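A sketch of the loader step, assuming a contents.plist with a "binary" key giving the executable's path relative to the bundle root (that key name is my assumption; Apple's real bundles use Info.plist with CFBundleExecutable):

# Launch a bundle: read its contents.plist, find the binary, run it with the
# bundle root as the working directory so the program can use "./..." paths.
import os
import plistlib
import subprocess
import sys

def launch_bundle(bundle_path: str) -> None:
    with open(os.path.join(bundle_path, "contents.plist"), "rb") as fh:
        contents = plistlib.load(fh)
    binary = os.path.abspath(os.path.join(bundle_path, contents["binary"]))  # e.g. "./bin/portal"
    subprocess.run([binary], cwd=bundle_path, check=True)

if __name__ == "__main__":
    launch_bundle(sys.argv[1])   # e.g. ./portal.app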

This is definitely doable on Linux, as GoboLinux does something similar. If we could standardise the extension names being used for bundles, there won't be any issues.

You could self-contain libssl now as:
/libraries/libssl.library/* (bin, libs, man, etc)

or even stored webpages as something like:
google.webpage/Index.htm
google.webpage/me.jpg

It's all clean and self-contained, and you just need to implement an option in the context menu to peer inside it. Some changes would need to be made on the console side to allow paths to work, though (unless you maintain symlinks to all the core libs in a central location, for instance).

Package Management based on bundles

Read the post on bundles first if you are not coming from an OSX background.


You can use package management with a bundle type of system easily; in fact, in a better way than you can with normal files. The failure to exploit hidden files in bundles is where Apple has failed in terms of bundling.

It's easy to do. You can also handle different architectures by using different contents.plist files for different archs (which should be based on the Linux kernel arch).


1) Scan over the hard disk structure (or use file monitoring) and find all sources.apt files in bundles.
2) The sources.apt is just a list of APT sources with updates for the program.
3) Scan over all the files again and find all bundles, then scan their contents.plist files to grab a list of all their files, their locations (in case there are multiple copies) and their versions. Or isolate this process to certain directories (a minimal sketch of steps 1-3 follows below).
4) Scan all the APT repos to find updates for the bundles.
5) Offer them to the user, or allow a user to right-click a file and select an update option in the context menu.
6) Update the files in place from the mirrors.
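A minimal sketch of steps 1-3, with assumed contents.plist keys ("files", "version") and a plain-text sources.apt format:

# Walk the disk, find *.app bundles, and collect their declared files, version
# and update sources.
import os
import plistlib

def scan_bundles(root: str = "/"):
    bundles = []
    for dirpath, dirnames, filenames in os.walk(root):
        if not dirpath.endswith(".app"):
            continue
        dirnames[:] = []          # don't descend into the bundle itself
        info = {"path": dirpath, "sources": [], "files": [], "version": None}
        sources = os.path.join(dirpath, "sources.apt")
        if os.path.exists(sources):
            with open(sources) as fh:
                info["sources"] = [line.strip() for line in fh if line.strip()]
        contents = os.path.join(dirpath, "contents.plist")
        if os.path.exists(contents):
            with open(contents, "rb") as fh:
                plist = plistlib.load(fh)
            info["files"] = plist.get("files", [])
            info["version"] = plist.get("version")
        bundles.append(info)
    return bundles

for bundle in scan_bundles("/home"):
    print(bundle["path"], bundle["version"], bundle["sources"])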


It's a more dynamic form of package management, as many files can now be moved around. Library locations should be standardised and kept in place, though, or else serious problems will occur with program loading. And the repos are decentralised. Finally, you can right-click any compatible bundle to update it and its dependencies.


To delete a program, you could drag the bundle to the recycle bin; it could read the contents file and handle a proper uninstallation if the user wishes.

Cross-architecture and cross-OS binaries.


Read the earlier section on OSX-like file bundling before reading this.

Bundling files in the way described above not only benefits users via one-click launching, but also makes cross-architecture binaries possible, or at least easier.

Let's use the above example again: /portal.app/bin/portal. We can now enhance it a bit to make it cross-architecture and OSX compatible:

/portal.app/bin/portal.ppc
/portal.app/bin/portal.x86
/portal.app/bin/portal.amd64
/portal.app/bin/portal.osx
/portal.app/description.plist (used by OSX)
/portal.app/contents.x86
/portal.app/description

OSX has its own way of dictating how its bundles are laid out, so we have to keep the description.plist for it. For Linux, we can simply add other descriptions to the file which point to the different binaries. It saves the end user having to select the correct binary off the CD for their architecture. Also, some users don't realise that even though they have an Athlon 64, their current distribution is installed in 32-bit mode; this would make that selection automatic. It could also allow schools and the like to deploy a program via a drag-and-drop operation onto hundreds of mixed Linux and OSX computers in a lab, without being concerned about compatibility.
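A sketch of the launcher side, mapping the running machine to one of the bundled binaries; the suffix naming follows the example layout above and is an assumption, not an existing standard:

# Pick the bundled binary that matches the current OS and CPU architecture.
import os
import platform
import subprocess
import sys

SUFFIX_BY_ARCH = {"i386": "x86", "i686": "x86", "x86_64": "amd64", "ppc": "ppc"}

def pick_binary(bundle_path: str, name: str) -> str:
    if sys.platform == "darwin":
        suffix = "osx"
    else:
        suffix = SUFFIX_BY_ARCH.get(platform.machine(), platform.machine())
    candidate = os.path.join(bundle_path, "bin", f"{name}.{suffix}")
    if not os.path.exists(candidate):
        raise RuntimeError(f"no binary for this architecture: {candidate}")
    return candidate

binary = pick_binary("./portal.app", "portal")
subprocess.run([binary], check=True)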

Probably the biggest argument against this is that it makes things bloaty. That is true, no doubt about it. However, on CDs it could be used just for the installer, and the separate install files could install only the correct architecture's binaries (as a bundle, but without the other architectures attached). In many cases, though, such as games, the binaries tend to be relatively small and the resources, which can easily be shared, are massive. One way to keep the system bloat-free would be to provide a stripping application that could rip out the other architectures.

I think it's safe for anyone to say that the ability to keep a single Skype application on a memory stick and pass it along a group of people, without being concerned about who has which type of computer, would be very neat, as opposed to now, where multiple binaries must be placed on the stick. End users generally aren't that smart, and many don't know the difference between OSX and Linux, so making cross-platform applications would ensure that they don't need to be concerned with what they are running; they would just need to know how to click an icon on a memory stick. Getting this working would be more about politics than anything else. Provided we could convince the major desktop environments to support file bundling for GUI applications, this will easily work!

Plus, 1 TB hard disks are already available. Anyone who honestly believes that this change will create even the smallest dent in performance frankly is a ricer and should stop using 5.25" floppies. Any hard disk (cheap or good) can easily handle recursing into an extra directory and parsing a contents file, and when solid-state storage goes mainstream, it will be even less of a concern.

If we want to go even further, someone could even code an application bundle loader for Windows, to allow cross-Windows, OSX and Linux binaries, though since you may need to override directory loading on Windows, it may prove impractical.

Clean up the directory structure using bundles

When you use the package management system (as described above), you also open up the possibility of changing the directory structure to something decent.

At the moment, it's something like:

/bin/app1
/bin/app2
/usr/local/bin/guiapp1
/lib/libapp1
/lib/libapp2
/lib/guilib
/share/man/app1
/share/man/app2
/usr/local/share/man/guiapp1
/usr/local/share/lib/somethingrelatedtoguiapp1butdoesn'tsayit

That's great. Now please, pick up guiapp1 and drop it onto another computer. That's right, you can't easily, because its components are scattered all over the hard disk, so you need to hunt for them. Fantastic job. I have a better idea; how about:


/core/programs/app1.app/bin/app1
/core/programs/app1.app/man/app1
/core/libraries/app1.lib/libapp1


/core/programs/app2.app/bin/app2
/core/programs/app2.app/man/app2
/core/libraries/app2.lib/libapp2




All files installed by app1 and app2 are now in either /core/programs/app.app or /core/libraries/app.lib. We are treating these as core system files, so they can be easily duplicated but not easily moved. In many cases it's easier to keep console-related apps in their current location but hide them away; portable GUI apps are more important, so below is an example of those:

/libraries/guilib.library/guilib


/programs/guiapp.app/bin/guiapp1
/programs/guiapp.app/man/guiapp1
/programs/guiapp.app/lib/somethingrelatedtoguiapp1butdoesn'tsayit

To run the program, you just need to click guiapp.app, and boom, the computer will work out where the binaries and parts are.

And you can now drag and drop guilib.library onto another computer, and drag and drop guiapp1.app onto another computer. Easier.

In fact, you could even avoid needing to drag guilib.library at all by statically compiling it, or by including it in guiapp.app and automatically copying it to the libraries location on the first load of the program, in case any other programs need it. It uses up more space, but it means that people no longer need to worry about the hard disk structure.
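A sketch of that first-run idea, with paths following the /libraries example above:

# If the shared library bundle is not present system-wide, copy the one
# shipped inside the app bundle on first run.
import os
import shutil

APP_BUNDLE = "./guiapp.app"
LIB_NAME = "guilib.library"
SYSTEM_LIB_DIR = "/libraries"

def ensure_library() -> str:
    system_copy = os.path.join(SYSTEM_LIB_DIR, LIB_NAME)
    if not os.path.exists(system_copy):
        bundled_copy = os.path.join(APP_BUNDLE, LIB_NAME)
        shutil.copytree(bundled_copy, system_copy)   # first run: install it
    return system_copy

print("using library at", ensure_library())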

Network mapping built in and notification of bad network security practices

GNOME and KDE should have a means of showing their interpretation of the network. This should include elements such as known switches, the internet gateway, incorrectly addressed traffic, external wireless routers and wireless traffic.

This would allow network technicians to see if a hub is being used instead of a proper switch which correctly addresses traffic (hubs are a security risk), and even whether PPPoE isn't properly set up (as it will show traffic all coming from a single external IP). Microsoft has a limited subset of this implemented; Apple hasn't got any of it. If Linux had it, many network technicians could potentially avoid reaching for a packet sniffer for simple issues.


The best part is that an intelligent way of presenting this data could help notify users when other users on their personal network may be able to see their data. While you can never guarantee the security of the internet, many users wouldn't even be able to recognise how exposed they are on a public wireless network, for instance. If you see traffic addressed to another IP arriving at your computer, it's likely the network is insecure.
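A sketch of that last check, using the third-party scapy library (it must run as root); if unicast packets addressed to other hosts keep arriving, the segment is probably a hub or an open wireless network:

import socket
from scapy.all import sniff, IP

# Work out this machine's LAN address with a throwaway UDP socket.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.connect(("192.0.2.1", 9))   # no traffic is actually sent
my_addr = probe.getsockname()[0]
probe.close()

def inspect(packet):
    if IP not in packet:
        return
    dst = packet[IP].dst
    first_octet = int(dst.split(".")[0])
    # Ignore our own traffic, broadcast and multicast; anything left was
    # addressed to another host and should not normally reach this NIC.
    if dst != my_addr and not dst.endswith(".255") and first_octet < 224:
        print("saw traffic for someone else:", packet[IP].src, "->", dst)

sniff(filter="ip", prn=inspect, store=False, count=200)   # needs root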

EtherApe is an example of a tool which provides a minimalist form of network mapping.

Automator-like tool for Linux



In Mac OSX Tiger, Apple introduced a new tool to allow end users to easily create macros which are capable of performing a task repetitively. With Automator, users drop actions onto a diagram, and Automator will automatically pipe data between the actions.

Examples of automator usage can be found on http://developer.apple.com/macosx/automator.html

Any solution which is created should work as well on one Desktop environment as it does on others.

The clone could also be embedded into applications, with custom actions, so Photoshop, for instance, could have custom actions that affect the current image, like cropping or despeckling. And the application produced at the end could be a Perl script (but preferably something which can perform actions in the GUI, like showing dialog boxes and asking for files to open).

Yes, Perl scripts don't take much to code, but we could make them even easier (and more friendly) by building them up in a diagram-based editor, asking for a file in the GUI first and passing it to the script, instead of screwing around on the command line.
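The underlying model is tiny; here is a sketch in Python rather than Perl, where actions are just functions and the "workflow" is the list the user assembles in the GUI (the example actions are invented):

# Each action takes the previous action's output and returns its own result.
import glob

def find_images(_):
    return glob.glob("/tmp/photos/*.jpg")        # placeholder input folder

def keep_recent(paths):
    return sorted(paths)[-10:]                   # pretend "most recent 10"

def announce(paths):
    for path in paths:
        print("would process:", path)
    return paths

workflow = [find_images, keep_recent, announce]

data = None
for action in workflow:
    data = action(data)   # Automator-style: pipe each result into the next action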

Time Machine for Linux


(Screenshot of Mac OSX Leopard - Time Machine from http://www.peachpitcommons.com/?p=223)

OSX supports Time Machine, a clean and easy-to-use backup solution that allows you to treat a hard disk like a CVS repository, limited only by the destination disk's size. It has flashy graphics to give you a good representation of what a supported application or directory looked like on a previous day. And the big plus is that you can restore the OS back to a hard disk as it appeared on a certain day (so you could have five computers set up with a config showing the machine on every Sunday for the past five weeks). At my previous place of work, I used this to create a copy of the server so there was a testbed for our new technicians to play with.



(Screenshot from http://en.wikipedia.org/wiki/Shadow_copy)

Windows supports Shadow Copy. It is similar to Time Machine but without the flashy graphics and, despite the claims many Apple users are happy to make, it also integrates cleanly into applications. Unlike Time Machine, though, it does not seem to be able to install a fresh copy of the OS from a certain day, only revert it back to a certain day.

What does Linux support? rsync, maybe? Great fun, if anyone can ever remember the damned cron and rsync commands. Seriously though, Linux needs something similar to Time Machine or Shadow Copy. If I crop my ex-wife out of a picture, then when we hook up again I should be able to just as easily revert the image back to the way it was. Or if I did it to an entire photo album, I should be able to select the whole album via the GUI and, within a few clicks, revert all the pics in that album back to a certain date.

If it is properly implemented and integrated, it could even save programmers from needing to run private home CVS systems to track their code for smaller projects.

In fact, if people wished to go overboard, you could even allow forking of restorations, so that you can have a few different sets. Say you set up two different types of servers on 10/10/10, one using MySQL, the other Oracle; you can change between them by going to the other side of the fork (at the expense of not having a consistent DB, but anyway, it's an example).

Apple's implementation is quite elegant and efficient. It uses dated directories as the base of each backup, backs up each file only once onto the drive, and then future revisions use hardlinks to the already backed-up file. This way each directory appears to contain every file from a certain day, but much less space is used. The disadvantage? Depending on the way checks are implemented, if one rarely modified backed-up file gets infected by a virus, all revisions are infected. However, it makes restoring a day as simple as copying a directory back and restoring the hardlinks.
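A minimal sketch of that hardlink scheme: each run creates a dated snapshot directory, unchanged files are hard-linked to the previous snapshot, and changed or new files are copied (error handling and deletions are ignored for brevity):

import os
import shutil
import time

def snapshot(source: str, backup_root: str) -> str:
    snapshots = sorted(os.listdir(backup_root)) if os.path.isdir(backup_root) else []
    previous = os.path.join(backup_root, snapshots[-1]) if snapshots else None
    current = os.path.join(backup_root, time.strftime("%Y-%m-%d-%H%M%S"))

    for dirpath, _, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        os.makedirs(os.path.join(current, rel), exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(current, rel, name)
            old = os.path.join(previous, rel, name) if previous else None
            if old and os.path.exists(old) and \
               os.path.getmtime(old) >= os.path.getmtime(src) and \
               os.path.getsize(old) == os.path.getsize(src):
                os.link(old, dst)          # unchanged: reuse the old copy
            else:
                shutil.copy2(src, dst)     # new or modified: store a fresh copy
    return current

print("snapshot written to", snapshot("/home/auzy/Documents", "/backups/Documents"))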

Flashy graphics are not a requirement. We can take Microsoft's no-nonsense "just get this done" approach, which won't woo users but is better than having something overly complex which may end up jamming up a few systems.

Security and stability centre

Just how secure is Linux?

Microsoft's attitude is that they acknowledge there are flaws, and have therefore implemented a security centre which shows the status of auto-updating, firewall, security settings, and checks for Anti-virus. They also provide a free anti-malware program which is updated often.

Apple's sales pitch is generally to claim they are secure. Anyone buying a Mac will often be told that "Macs can't get viruses" or "OSX is much more secure than other operating systems", or sales staff will even go as far as saying "OSX is based on open source, so it's audited by lots of developers"; as such, Apple's security settings only cover encryption, screensaver lockouts and the like. If Apple started providing anti-virus, it would freak people out, as they would no longer be in the mindset that OSX cannot be hacked.

Now, what is Linux's position? It's hard to say, as there are hardened distros, rootkit detectors and security auditing programs, but thus far none of the major desktop environments offer a security centre to identify an insecure setup.

What we need is an application which centralises security. I'd like to think of it as "Security and Stability". It should monitor the following (a minimal sketch of a few of these checks appears after the list):
- Firewall status. No iptables rules enabled = insecure.
- Security updates. Users should be informed when there are updates available specifically targeting security.
- User rights. If the user is running as root, they should be told.
- Anti-virus. There should be integration with anti-virus here, or a one-click means of listing the various anti-virus packages. Integration should also allow a one-click option to start scanning, and an indication of whether automatic scanning is enabled.
- Rootkit detection. There are lots of rootkit detection systems out there. I suggest that users be able to click a button and run a quick test.
- Permission checker. This was covered in detail earlier; you should be able to run a scan from here.
- Identify whether your network is broadcasting everyone's traffic to everyone (i.e. hubs, not switches).
- Anything else.
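A minimal sketch of a few of these checks, assuming a Debian-style system with iptables and apt available and run as root; a real centre would present the results graphically:

import os
import subprocess

def check_firewall() -> str:
    # "iptables -S" lists the loaded rules; a bare default policy is only a few lines.
    rules = subprocess.run(["iptables", "-S"], capture_output=True, text=True)
    loaded = rules.returncode == 0 and len(rules.stdout.strip().splitlines()) > 3
    return "Firewall: rules loaded" if loaded else "Firewall: NO iptables rules (insecure)"

def check_user() -> str:
    return "User: running as root (insecure)" if os.geteuid() == 0 else "User: unprivileged"

def check_updates() -> str:
    out = subprocess.run(["apt", "list", "--upgradable"], capture_output=True, text=True)
    pending = max(len(out.stdout.strip().splitlines()) - 1, 0)   # first line is a banner
    return f"Updates: {pending} packages can be upgraded"

for check in (check_firewall, check_user, check_updates):
    print(check())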

Microsoft has the right idea here: a clean and easy-to-understand interface is required. Otherwise, we will end up with a bunch of computer lusers who happily turn off their firewalls because "they are running Linux", without anyone telling them the risks or that it's a bad idea. Nobody likes to put up with them three months later when they haven't upgraded their OpenSSH for months, a hacker has wiped out their kernel, and they want someone to help them fix it.

If we want to support everyone's needs, this is a must, and we can avoid the troubles Microsoft has had in the past by doing so.

Let's not cancel installations because the package manager is locked



At the moment, installing a package is sometimes like:

1) Install package
2) While package is installing, run another instance of the package manager
3) The second package manager just commits suicide with a "repository is locked" error. If you're lucky, it asks you to click a retry button; generally it will just error out with a lame message.


WTF. Seriously, change it so that the second package manager simply waits for the first.
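The desired behaviour is simply a blocking lock. A sketch (the lock file path is illustrative; dpkg and APT have their own lock files and locking scheme):

# Instead of bailing out when the lock is held, block until the other package
# manager finishes.
import fcntl

LOCK_FILE = "/var/lock/example-packager.lock"   # placeholder path

def with_package_lock(action):
    with open(LOCK_FILE, "w") as lock:
        print("Waiting for any other package manager to finish...")
        fcntl.flock(lock, fcntl.LOCK_EX)    # blocks until the other instance releases it
        try:
            action()
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

with_package_lock(lambda: print("installing package..."))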


Nobody likes programs which quit with locking errors. And yes, APT, my friend, you locked your repository on me yesterday. I knew what was wrong immediately, but I know plenty of people who wouldn't ("locking error, eh, does that mean you have locked up?"). I love you APT, but sometimes it's too much :-(


Windows apps do the same thing. Many will just show an error requiring you to press retry. Apple's apps on Windows decide to be "user friendly" and simply crash during installation when there are locks, so you have no idea what went wrong.


On OSX, however, it's a different story: when programs are installed via Apple's package management format, they will wait for each other. It makes remote administration easier (i.e. you can have four administrators logged in at the same time installing things without three installers stalling or crashing; they simply wait for their turn).

Separation of configuration files in user directories

User directories on Linux are easily a mess at the moment. They need more standardisation and more sense to remain clean. If you go to the terminal you may discover that your home directory looks something like:



/home/auzy/Documents
/home/auzy/.Azureus (Hidden)
/home/auzy/.gnome2 (Hidden)
/home/auzy/Desktop
/home/auzy/.bashrc2 (Hidden)
/home/auzy/readme.rtf
/home/auzy/.Trash (Hidden)
/home/auzy/iffy.rtf
/home/auzy/delete me.rtf
/home/auzy/argggggg.c
/home/auzy/fgdhgfdhd.txt
/home/auzy/Music
/home/auzy/.ooffice (Hidden)
/home/auzy/friendsassignment.c
/home/auzy/friendsassignmentCopy.c
etc.


While you may note that anything starting with a "." is normally hidden, what if someone wants to delete the settings for a program? They need to manually unhide it and sort through the dozens of directories in the home directory to find it. The problems with this are that:

a) It's messy, and certainly not a clean solution.
b) Users cannot easily access their settings.
c) Everyone's home directory is normally trashed with hundreds of other files, making it difficult to navigate.
d) It's not standardised.


A better way of organising the home directories would be something like:
/home/auzy/Documents
/home/auzy/Desktop
/home/auzy/readme.rtf
/home/auzy/iffy.rtf
/home/auzy/delete me.rtf
/home/auzy/argggggg.c
/home/auzy/fgdhgfdhd.txt
/home/auzy/Music
/home/auzy/friendsassignment.c
/home/auzy/friendsassignmentCopy.c
/home/auzy/Settings/
/home/auzy/Settings/org.Azureus.Azureus/*
/home/auzy/Settings/org.gnome.gnome2/*
/home/auzy/Settings/com.sun.ooffice/*
/home/auzy/Settings/org.bash.bash2/bashrc2
/home/auzy/Trash/*

The advantages? Simple:

a) All user settings are now centralised in a single directory.
b) Your home directory can be as messy as you want, and you will still be able to find your settings easily.
c) Because settings are stored under reverse-domain names, you can easily find all the settings which relate to the same project. Whilst at first this appears to make things harder, it doesn't, because it means that if you wish to change from KDE to GNOME, you can now easily delete all the org.kde.* settings and they are gone.
d) No more searching for program preferences. You know exactly where your personal settings for a program are.
e) There is no longer any need to hide them, because they are already cleaned up and won't make the user's directory appear messy.
f) It makes it easier to wipe the settings for specific programs: no need to unhide anything, search everywhere, or go to FAQs. You can easily guess the location now.
g) Finding something in your home directory is less painful with ls.
h) You can now use file hiding in your directory for useful purposes. Can't think of any, though.

From experience dealing with Apple (who uses this, or a very similar, method), with Microsoft and with Linux, Apple's method is easily the most effective.

Unfortunately, it may require a few modifications to programs to support it fully.

Untested programs will still work, but will put their settings in the wrong spot. You can still easily create a symlink to those directories with one line of code, to support both conventions, so nobody loses.
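A sketch of that compatibility shim: move a legacy dot-directory into ~/Settings under a reverse-domain name and leave a symlink behind, so untested programs keep working (the name mapping is an invented example):

import os

HOME = os.path.expanduser("~")
LEGACY = os.path.join(HOME, ".Azureus")
NEW = os.path.join(HOME, "Settings", "org.Azureus.Azureus")

if os.path.isdir(LEGACY) and not os.path.islink(LEGACY):
    os.makedirs(os.path.dirname(NEW), exist_ok=True)
    os.rename(LEGACY, NEW)     # move the settings into ~/Settings
    os.symlink(NEW, LEGACY)    # old path still works for unported programs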