Most Popular Posts

Dec 7, 2010

Free Google Apps Mail Notifier (version 0.1)

If you have a Google Apps mail account, you may need a desktop tool that periodically checks your mailbox and notifies you when there are new messages. Keeping a browser window open at all times and manually refreshing it is really annoying.

Google has such a tool, Gmail Notifier, but unfortunately it only works for regular Gmail accounts (@gmail.com) and not for Google Apps accounts (@yourdomain.com). A web search turns up several independent implementations of such tools, but I have two main problems with all the tools I have seen:

  • None of those tools is free. (I don’t pay for the mail service itself. It’s ridiculous to have to pay for the notifications.)
  • None of those tools has its source code available, i.e. I cannot be sure what the tool does with my credentials.

Therefore, I decided to write a Google Apps mail notification tool that is free of any charge and that is distributed as source code, so you can see what it does before you start using it. Feel free to adjust the source code (within the limits of the BSD license) to suit your needs better. Here is how to build the tool:

  1. Make sure you have .NET 4 installed.
  2. Download the source file. Review the code to make sure it doesn’t do something you don’t like.
  3. The tool needs two icon (.ico) files – your_alarm_icon.ico and your_quiet_icon.ico. The former icon is shown when you have unread messages and the latter one is shown when there are no unread messages. You have to provide these two files yourself. That is not as scary as it sounds – just search your system drive for ‘.ico’ and you’ll find plenty. Copy the two you like best and name them as required above. You will be able to replace them later if you want to without having to recompile the .exe, because they are loaded dynamically. Note: These two .ico files must reside in the same folder with the .exe. 
  4. To compile the source code, run the following command in a plain cmd window from the same directory where you saved the source file. (The entire command and arguments must be on one line):

%windir%\microsoft.net\framework\v4.0.30319\csc /t:winexe /o
/r:mscorlib.dll /r:System.dll /r:System.Drawing.dll /r:System.Windows.Forms.dll /r:System.Xml.dll 
FreeGoogleAppsMailNotifier-0.1.cs

Now you may run FreeGoogleAppsMailNotifier-0.1.exe. If you decide to copy it somewhere, remember either to copy the two .ico files along with it or to supply alternative ones.
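
If you are curious how such a notifier can work before you dive into the full source, here is a rough sketch of the general approach – polling the Google Apps Atom feed over HTTPS and reading the unread-message count. The names and details below are illustrative only; the actual tool may do things differently:

using System;
using System.Net;
using System.Xml;

class FeedCheckSketch
{
    // Illustrative only: returns the number of unread messages for a Google Apps account.
    static int GetUnreadCount(string domain, string user, string password)
    {
        // Google Apps exposes an Atom feed of unread messages per account.
        string url = "https://mail.google.com/a/" + domain + "/feed/atom";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Credentials = new NetworkCredential(user, password);

        using (var response = request.GetResponse())
        {
            var doc = new XmlDocument();
            doc.Load(response.GetResponseStream());

            // The <fullcount> element carries the number of unread messages.
            var namespaces = new XmlNamespaceManager(doc.NameTable);
            namespaces.AddNamespace("a", "http://purl.org/atom/ns#");
            var node = doc.SelectSingleNode("//a:fullcount", namespaces);
            return node != null ? int.Parse(node.InnerText) : 0;
        }
    }
}

A real notifier would call something like this on a timer and switch the tray icon between your_alarm_icon.ico and your_quiet_icon.ico depending on the count.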

Since this is the very first version of the tool, it is possible that there are some bugs. If you find one, please report it here as a comment. Thanks.

Dec 3, 2010

Everybody Is a Programmer

When I was a young developer, I used to treat my programs as kids. Now that I am an old developer, I treat my kids as programs.

Oftentimes we don’t realize we are programming our kids by how we act, how we speak, how we interact with other people, or even by not being with our kids. Kids compile the code they receive from us and (God forbid) from other sources, and start executing it.

As it is with computers, we don’t really know what we’ve programmed them to do until we let them run. Also as it is with computers, finding and fixing bugs is a long and painful process that is not always successful. Unlike it is with computers, you cannot scrap the project and start over.

Therefore, remember that you are a programmer. And be a good one.

Oct 15, 2010

Meet the Coach

As my third season as a recreational soccer coach is wrapping up, I am starting to prepare for the next season. It is time to record what I provide and what I expect as a coach, so parents can read it in advance.

My Motivation

I am a volunteer. I am not paid by the association, nor do I get any discount. As a parent I pay as much as every other parent does.

I do this because I want my kid to advance in the game. We spend a lot of time practicing one-on-one, but soccer is a team sport. A player should ultimately practice as part of a team competing against another team. Furthermore, the better a team is, the more individual players are challenged, and the more they advance. That is why I want to build a good team.

What I Provide

I teach. And I am serious about it. I value our time together. I do not waste my time trying to entertain kids who are not motivated. I teach kids who want to learn.

I teach both individual skills and team tactics with roughly equal emphasis. I also promote physical fitness and proper warm-up practices. The first half of each practice consists of a warm-up and drills, and the second half of a short game. If a kid is late for practice, he may skip a drill, but he will not skip the warm-up. Running is not a punishment – it is a prerequisite.

Lastly, I set the number of practices to the maximum allowed by the association in order to maximize the value for kids.

What I Expect

My expectations are not of the kids I coach, but of their parents.

I expect parents to be interested in the advancement of their own kid, and to actively participate in the practices. I greatly appreciate help.

I expect parents to bring their kids on time for practice and especially on game days. Please allocate at least 10-15 minutes for warming up before games.

I expect parents to tie their kid’s shoes before practices and games. I understand this is the age when we teach our kids to tie their shoes, but there are occasions when we have to make exceptions.

I expect parents to bring their kid’s ball to practices. If you bring your kid without a ball, you are not leaving me many options for keeping him busy.

Game Days

A roster has twice as many kids as there may be on the field at the same time. The association has a rule that each kid is entitled to 50% of the game. I interpret that rule as: each kid is entitled to 50% of the game while the kid is present at the field. That means if a kid shows up for the second half, he will still play 50% of that second half, not the entire half.

I let kids play in the order of their arrival at the field. If you want your kid to start, come early.

Games are more intense than practices. Therefore kids need to warm up both physically and technically. Allocate 10-15 minutes for that and bring their ball.

Play with Your Kid

My observation is that the kids whose parents commit time to play with them advance the most. You need not have played soccer in order to help your kid make progress. Even taking your kid to the park to watch a live game between older kids or adult amateurs will inspire him to learn. Try it.

Sep 11, 2010

Active Software Specification (a.k.a. Test-Driven Development)

I ignored Test-Driven Development (TDD) for years. For some reason it sounded to me like “test development”, which was not cool. Just recently I started suspecting that my view of how software products should be developed might be in line with TDD. So I took a formal class, and not only was I happy to discover I had come up with the same concept independently, but I also got my view clarified and taken forward.

The reason why I am writing this article is to share this knowledge with my fellow developers, because it has become evident from my interactions with other developers in my organization that while the term "test-driven development” is quite popular, its philosophy is largely misunderstood. My goal is to get the people involved in software development to rethink who should do what in the software development process.

Specification Driven Development

A typical software organization has its product development driven by some sort of a specification. That could be a text document, a prototype, a slideshow of screenshots, or a mix of those. Whatever form the specification takes, its function is to project what the product should look like. The main problem with this approach is that the specification is passive and it must be interpreted by humans. First, a developer must interpret the spec and implement his interpretation in source code. Then, the specification is interpreted by a tester who verifies the developer’s interpretation and implementation. Wherever those two interpretations differ, a bug is filed, and wherever they don’t, it is assumed the specification is correctly implemented. Now we have two, more concrete, problems: 1) the organization churns on aligning interpretations, and 2) the fact that a developer and a tester share the same interpretation doesn’t mean it matches the author’s intent.

Things get even worse with time. Towards the end of the release cycle, both the developer and the tester have a pretty good understanding of what the product should look like, and specification authors tend to neglect keeping their specifications in sync with the product. The endgame is the time when the public surface endures many small tweaks – too small to be worth updating the specification, but enough for somebody looking at the specification to consider it outdated. So when the next release cycle starts, the team has a dilemma: should they spend the time to bring the specification in sync with the product, or should they focus on the incremental difference and specify only that? The latter option is much more attractive and is the typical winner. One more release cycle and the original specification will be forgotten for good. Along with the original spec is lost the ability to track use cases that may get broken.

Active Software Specification

Specification-driven development somewhat works for v1, but it gradually falls apart in subsequent releases. The main reason for that is the passiveness of the specification. The alternative would be to write the specification in a different form – one that need not be interpreted and that would not get out of sync with the product. One such form is a suite of tests. A test is not a projection of a product feature – it is an active verification of the feature’s implementation. Notice there is nothing to be interpreted any more.

More specifically, these tests are check-in tests, i.e. any attempt to check in a source code change would be rejected unless all those tests pass. Permissions to modify those tests should be restricted to the architect who authored them and a manager responsible for the final product. If this sounds too rigid, let me remind you this: a broken test means one thing – a broken use case. That should be approved by both a technical and a business owner. As long as these tests live along with the source code, they will remain in sync for many releases. That is the philosophy of Test-Driven Development.
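
To make this more tangible, here is what a check-in test that doubles as a specification might look like. This is a minimal sketch – the ShoppingCart and Item classes are invented purely for illustration, and I’m assuming an NUnit-style test framework:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical product code, invented purely for illustration.
public class Item
{
    public string Name { get; private set; }
    public decimal Price { get; private set; }
    public Item(string name, decimal price) { Name = name; Price = price; }
}

public class ShoppingCart
{
    private readonly List<Item> items = new List<Item>();
    public void Add(Item item) { items.Add(item); }
    public decimal Total { get { return items.Sum(i => i.Price); } }
}

// The check-in tests below are the specification of the feature above.
// A failing test here means a broken use case and requires sign-off to change.
[TestFixture]
public class ShoppingCartSpecification
{
    [Test]
    public void EmptyCartHasZeroTotal()
    {
        Assert.AreEqual(0m, new ShoppingCart().Total);
    }

    [Test]
    public void AddingAnItemIncreasesTheTotalByItsPrice()
    {
        var cart = new ShoppingCart();
        cart.Add(new Item("book", 12.50m));
        Assert.AreEqual(12.50m, cart.Total);
    }
}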

Some Clarifications on Test-Driven Development

My observation is developers tend to avoid TDD due to a fear of “test first”. “Test first” is an assumption that all tests must be written before product development may start. In reality, some tests will indeed be written before any product code is checked in, and the more tests are in place the better. However, it is unrealistic to expect to have the complete test suite ready before any product development may start. Most likely tests will show up shortly before their corresponding product features.

The next controversy is: who should write those check-in tests? My answer is: the same person who would otherwise write the specification. After all, these check-in tests specify the product. This doesn’t mean our industry has the tools to help check-in test writers and reviewers make a test suite look like a specification, but I hope we will get there.

Then what would today’s software testers do? Let me clarify the meaning of this question: if a product satisfies all its check-in tests, then it is in full conformance with its architect’s intent; is there anything further to be verified? Yes, there is – the overall usability of the product, i.e. what applications can be implemented on top of this product (if the product is a component), or what operations can be performed using it (if it is an end-user product). That is what today’s software testers should do. And I would call those people “application developers”. The result of their work should be made available to customers either as free (but supported) utilities, or as commercial products.

Conclusion

I have authored specifications, and I have experienced the difficulty with keeping a specification current. I can point to a large number of outdated specifications, and I believe anybody can too. I continue searching for an alternative to specification-driven development. Regarding Test-Driven Development, it seems very promising - I have not practiced it as part of a whole organization, but I have adopted it in my own micro world and it works for me. What I find missing is tools to make a check-in test suite look like a specification, or the other way around – to extract tests from a textual specification (with extra visual content), but I remain optimistic textual specifications will start yielding to check-in tests.

Aug 20, 2010

Don’t log me in. Let me do things only I can do, instead.

How do you feel when you log in to an online service – protected or vulnerable? If you feel protected, this article is for you. Read it, and then answer the question again. If you don’t feel logging in is the correct way to interact with a system, join me in my quest to educate people to demand better security mechanisms from online service providers.

How a system logs in a user

The login process consists of two steps: 1) authentication, and 2) session fabrication. The outcome of the authentication step is a mere true/false bit reflecting whether the user is who they claim to be. The outcome of the session fabrication step is a session object representing the authenticated user. Let me highlight a few things here: the system can fabricate a session for any user account. (Otherwise there would be users who could not log in.) The system does not need the authentication bit in order to fabricate a session – technically speaking, honoring it is only a matter of courtesy – the system contains code and resources sufficient to impersonate any user with or without the authentication bit.

Several things are scary here. First and foremost, password storage – while some systems store only hashes of the passwords, other systems may store the actual passwords. Passwords may be encrypted or may be stored as plain text. If they are encrypted, the system must have the key to decrypt any password (for the authentication step). You can count on the fact that there is a system administrator who can read all passwords.
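
For reference, “storing only hashes” means something like the sketch below – the salt handling and the choice of SHA-256 are illustrative, not a description of how any particular service does it:

using System;
using System.Security.Cryptography;
using System.Text;

static class PasswordHashSketch
{
    // Only the irreversible hash needs to be stored, never the password itself.
    public static string HashPassword(string password, string salt)
    {
        using (var sha = SHA256.Create())
        {
            byte[] data = Encoding.UTF8.GetBytes(salt + password);
            return Convert.ToBase64String(sha.ComputeHash(data));
        }
    }
}

At login time the system recomputes the hash from the submitted password and compares it with the stored one; the original password never has to be recoverable.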

The second scary thing is the existence of the session fabrication functionality. It is possible that the session fabrication functionality is accessible without going through authentication. Then a person with local access, e.g. a system administrator, can launch a program that fabricates sessions, i.e. impersonates users.

Third, the system performs all the actions. A user places a request for a particular action, but it is the system that carries it forward. That requires the system to be able to process all the data in store, i.e. the system can read anybody’s data regardless of who the session user is. This is scary, because there are users with permissions to see other users’ data, e.g. customer support, management, administrators, etc. They can read your data without doing any hacks.

So far I’ve only mentioned employees of service provider organizations. Consider the millions of professional crackers as well as enthusiasts throughout the world. Once an attacker gets into such a system, they have all of it.

How to solve this

Whatever route the online industry takes to solve this problem, it will take decades to become noticeable. That is mainly due to the fact that online service providers are not impacted as deeply as consumers are. And consumers are hard to unite.

Anyway, is there an alternative to login? I dare say yes. I have not worked out the details, but here is the general idea. It is based on public key authentication – instead of storing a password at the system, a user shares their public key when they create their user account. There are no sessions – each request is authenticated individually. The authentication consists of verifying that the subject request has been signed with the user’s private key. User data is stored as units, each of which entered the system already encrypted with the user’s public key. If there is any unencrypted metadata associated with the actual data blobs, the system may utilize it in any way it needs to.

Here is how an online email system might look. A user sends a signed request to view their mailbox. (Notice there is no need of a login.) The system verifies the request is signed with the user’s key. It pulls the list of metadata items – subject, sender, receive date, message size, etc., encrypts it with the user’s public key and sends it back to the user. The user’s web browser sees a properly tagged encrypted blob, pulls the user’s private key (the browser may ask the user for the key’s location), decrypts the blob and renders it nicely for the user. When the user clicks on a specific email message, another signed request is sent to the system. In this case the system sends back the already encrypted body of the requested email message. The system can provide additional value by verifying whether each message has been signed with a key of a trusted sender and treating all unsigned messages as spam.
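
Here is a minimal sketch of the signing and verification pieces in .NET. The message framing and key management are my assumptions, not a description of any existing service:

using System.Security.Cryptography;
using System.Text;

static class SignedRequestSketch
{
    // The user's side: sign the request body with the private key.
    public static byte[] Sign(string requestBody, RSACryptoServiceProvider privateKey)
    {
        byte[] data = Encoding.UTF8.GetBytes(requestBody);
        return privateKey.SignData(data, new SHA1CryptoServiceProvider());
    }

    // The service's side: verify the signature with the public key
    // the user shared when the account was created.
    public static bool Verify(string requestBody, byte[] signature, RSACryptoServiceProvider publicKey)
    {
        byte[] data = Encoding.UTF8.GetBytes(requestBody);
        return publicKey.VerifyData(data, new SHA1CryptoServiceProvider(), signature);
    }
}

Notice there is no password anywhere – the service stores only the public key, which is useless for impersonating the user.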

The first step towards that futuristic system is for consumers to start valuing their own privacy – create a private-public key pair, give the public key to potential recipients and demand their public keys in return, start signing all outgoing messages, and start reminding senders of unsigned messages to start signing. Only when momentum toward user privacy has built will online service providers consider giving up their ability to spy on people’s private data.

May 15, 2010

My Bets on SQL

I intentionally separated my bets on SQL, the language, from my bets on relational databases. These two are well decoupled and that’s why I believe they will start evolving separately. Let me address the obvious first:

SQL will become the COBOL of the 21st century.

When I started learning programming in the mid/late 80’s, I heard of COBOL, but I never got to know anyone who was programming in it, nor had I seen any COBOL code. So I, like the majority of my generation, thought COBOL was something ancient that no longer existed. My perception of COBOL turned around in the late 90’s during the Y2K bug madness when large corporations, banks in particular, were recruiting COBOL programmers offering insane wages. It appeared that critical products written in COBOL existed and their customers were not looking to retire them. Now that more than 10 years have passed since then, I still keep hearing about organizations using mainframe computers and running software products written in COBOL.

Given that schools have not trained students how to program in COBOL for at least 20 years, I assume the wages offered to the “COBOL dinosaurs” who are still on the job have only gone up. SQL is slightly different – schools do teach it and developers are aware of its importance to customers, but for some unknown reason the majority of them choose to ignore it as a career path. One may argue that the number of qualified SQL developers is much larger than the number of qualified COBOL developers and therefore a shortage is unlikely. However, data acquisition and data processing are critical for every mid-size and large organization. As the volumes of data that organizations acquire and process grow, so does the need for innovation in this area – the software products that do that are constantly upgraded and refactored. My prediction is that the shortage of SQL developers 20 years from now will be bigger than the shortage of COBOL developers through this past decade.

My second bet is on the form in which SQL will exist. I see SQL as an early branch in the evolution of functional languages. It is probably the most widely used language where the programmer specifies the desired result and not how to get to it. SQL’s concept is very simple – sets and set operations. If SQL were designed today, it would have about 20 operators. That’s what LINQ has. Unfortunately, when SQL was designed over 30 years ago, it was meant to be used by non-programmers. That’s why its operators are combined into a single, verbose, [SELECT] statement. Well, non-programmers never picked up SQL. The uniqueness of its syntax made it difficult for programmers too. I admit that, having actively programmed in SQL for the last 12 years, I still rarely get away without looking up the documentation. Lastly, each database vendor maintains its own dialect and it is even possible to mix standard and vendor-specific options in the same statement. Given the importance of data processing for our modern civilization, it is time to introduce a new generation of data query language:

A true functional query language will get adopted by database vendors, whose database engines will implement it natively.

The first keyword here is “functional”. Functional programming and functional languages have evolved dramatically ever since SQL branched off. So a functional language could easily be adapted to sets and set operations. The excitement among .NET programmers about LINQ confirms that. The problem with LINQ is that it is a small band-aid to a big wound – it is a client-side translation of functional expressions to plain old SQL statements. The database servers still expect loosely-typed textual queries. I’m predicting a disruptive change in database connectivity where database servers will expect compiled expressions that represent functional queries.
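
To illustrate the contrast, here is a hypothetical query written once as a classic SELECT statement and once as a chain of LINQ set operators over in-memory data. The Order type and the data are invented purely for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public int CustomerId;
    public string Status;
    public decimal Amount;
}

class QueryComparison
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { CustomerId = 1, Status = "Shipped", Amount = 20m },
            new Order { CustomerId = 1, Status = "Shipped", Amount = 5m },
            new Order { CustomerId = 2, Status = "Pending", Amount = 7m },
        };

        // SQL folds everything into one verbose statement:
        //   SELECT CustomerId, SUM(Amount) AS Total
        //   FROM Orders
        //   WHERE Status = 'Shipped'
        //   GROUP BY CustomerId
        //   ORDER BY Total DESC;
        //
        // LINQ expresses the same result as a chain of small set operators:
        var totals = orders
            .Where(o => o.Status == "Shipped")
            .GroupBy(o => o.CustomerId)
            .Select(g => new { CustomerId = g.Key, Total = g.Sum(o => o.Amount) })
            .OrderByDescending(x => x.Total);

        foreach (var row in totals)
            Console.WriteLine(row.CustomerId + ": " + row.Total);
    }
}

Today a LINQ provider would translate the chain above back into SQL text before sending it to the server; the bet is that future engines will accept such expressions directly.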

Apr 16, 2010

My Bets on Relational Databases

Relational database engines emerged because they could process large and complex data sets faster than ISAM engines. Notice the subtle qualification – large and complex data sets. Today relational engines scale down and compete well on small and flat data sets as well, but that wasn’t always the case. For years we, who wanted to use database servers in our work, had to prove to our management that we would deal with data sets large enough to justify the purchase of database software as well as appropriate hardware for it. Younger software developers probably don’t understand that, since they can download SQL Server Express or Oracle SQL Developer for free, not to mention the open source database servers. Briefly, database servers have become a commodity.

ISAM engines are extinct today because the pioneers of the database industry foresaw the continuous growth of the data volumes that would need to be processed. While that growth has been steady in absolute measures, it has reached a tipping point relative to the capacity of a single [computing] box. For the first 25 years of their existence database engines were predominantly single-box, because data could fit on a single hard disk and a single CPU was sufficient to process it. About 10 years ago RAID controllers multiplied disk capacity several times, and SANs became popular shortly after. At the same time multi-processor architectures increased the processing power of the machines. While that extended the life of single-box architectures by another decade, it revealed that the end was approaching.

About 5 years ago a new pattern emerged – Map Reduce. It gives up the ability to execute arbitrary relational queries in favor of the ability to distribute storage across a large number of machines as well as to process those large data sets in parallel. Both proprietary and community implementations of the Map Reduce pattern have been growing and improving their feature sets. The first question that arises is: will relational databases continue to exist?
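
For readers unfamiliar with the pattern, here is a toy, in-process illustration of Map Reduce – counting words across documents. Real implementations distribute both phases across many machines; this sketch only shows the shape of the computation:

using System;
using System.Collections.Generic;
using System.Linq;

class MapReduceSketch
{
    static void Main()
    {
        string[] documents =
        {
            "the quick brown fox",
            "the lazy dog",
        };

        // Map: each document independently emits (word, 1) pairs.
        var mapped = documents.SelectMany(
            doc => doc.Split(' ').Select(word => new KeyValuePair<string, int>(word, 1)));

        // Shuffle + Reduce: group the pairs by key and aggregate each group.
        var counts = mapped
            .GroupBy(pair => pair.Key)
            .Select(group => new { Word = group.Key, Count = group.Sum(p => p.Value) });

        foreach (var entry in counts)
            Console.WriteLine(entry.Word + ": " + entry.Count);
    }
}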

Relational databases will not go away. They will continue serving their mission which is to execute complex relational queries against large data sets.

The main factor in my bet is the demand for executing complex queries. Our civilization is complex, and so are its needs. The Map Reduce pattern seems to represent the next generation of ISAM. We’ve already witnessed ISAM yielding to relational databases. In fact, both Google’s and Hadoop’s implementations of Map Reduce include relational engines – Bigtable and HBase, respectively. The next question is: what would relational database servers look like?

Relational database servers will become relational database clusters with hundreds of storage and relational nodes. Those relational clusters are likely to employ a non-uniform architecture with regard to storage, i.e. each storage node is likely to have a set of dedicated relational nodes that will execute queries against data from that storage node, eventually joining it with data from other nodes.

Here are some clarifications. I assume that if a system can scale out to 200 nodes, it can scale out to 2,000 nodes. I cannot guess whether today’s relational database servers will evolve incrementally or whether they will be rewritten from scratch based on a scalable pattern like Map Reduce.

P.S. My bet on the SQL language will follow.

Apr 14, 2010

How to Share a Local Installation of Visual Studio 2010 Help

I don’t want to debate whether you should use a local Visual Studio 2010 Help installation or the MSDN web site. If you have already decided to use a local Help installation, you may want to access it from multiple machines, because it doesn’t make much sense to install the exact same content on every machine you use to develop software.

Why is this use case problematic? With the launch of Visual Studio 2010 and .NET 4, the formats of the local Help and the MSDN have been unified. So the user experience is very similar whichever way one chooses to use help. The local Help installation consists of a static file store exposed through an http agent. The user interface is the default web browser. The problem is that the http agent rejects requests unless they reference the host as ‘127.0.0.1’ or ‘localhost’.

The most obvious solution would be to install a web proxy on the machine where you have the local Help installation. That web proxy should simply redirect all the incoming traffic to the Help http agent. If IIS (Internet Information Server) had a built-in proxy module, I’d probably use it. Unfortunately, IIS only has a redirection module which is not the same and doesn’t solve the problem. So you have to install a standalone proxy. You could probably find a one-screen sample from WCF (Windows Communication Foundation), but you’ll still have to customize it and maintain it. I wouldn’t sign up for that.

What I recommend is a quick and easy way to point a local Help agent to a remote Help store:

  • Find a drive letter that is available on all machines that will share the same Help installation. Let’s say that’s H:.
  • Share the root folder where you installed the Help content.
  • Map the H: drive letter to that folder on all machines from where you want to access that Help installation.
    • If you want to access it from the machine where you installed the Help, you should map the H: drive letter on that machine too.
    • Make sure you check the Reconnect at Logon checkbox.
  • Modify the manifest\queryManifest.3.xml file of the local Help store as follows:

<queryManifest version="1.0">
      <catalogs>
            <catalog productId="VS" productVersion="100" productLocale="EN-US" productDisplayName="" sourceType="index">
                  <catalogPath>H:\catalogs\VS\100\EN-US</catalogPath>
                  <contentPath>H:\content</contentPath>
            </catalog>
      </catalogs>
</queryManifest>
  • Set the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Help\v1.0\LocalStore registry key to “H:” on each machine from where you want to access the Help installation.
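
If you prefer to script that last step instead of editing the registry by hand, a small sketch like the one below sets the value. It needs administrative rights, and on 64-bit Windows you should verify which registry view (native or Wow6432Node) your Help agent actually reads – I haven’t checked:

using Microsoft.Win32;

class SetHelpLocalStore
{
    static void Main()
    {
        // Point the local Help agent at the mapped drive.
        using (var key = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\Microsoft\Help\v1.0"))
        {
            key.SetValue("LocalStore", "H:");
        }
    }
}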

Apr 7, 2010

My Low-Sodium No Sugar Flax Bread Recipe

I’ve been making bread at home for about 10 years. I began experimenting with my own recipes a couple of years ago. Now I can consistently produce delicious bread. My objective has been to find the optimal balance between healthiness and taste. In particular, I’ve been trying to use minimal refined carbohydrates and salt. Below is my “optimum” recipe followed by some variations with reinforced and with relaxed constraints.

Ingredients

  • 2 1/4 cups water
  • 1/2 tsp salt
  • 1/2 cup ground flax seed
  • 2 cups all-purpose unbleached flour
  • 2 cups whole wheat flour
  • 1 1/2 tsp yeast

I recommend using broad and shallow cups as opposed to narrow and deep ones. My experience shows that while water easily fills out a cup of any shape, that is not true for flour. Using narrow and deep cups may result in an overwatered dough that will have to be baked further.

I use a bread machine for both kneading and baking. Feel free to bake it in the oven instead. The bread will come out better.

Steps

  1. If you keep the yeast in the refrigerator, put 1 1/2 tsp of yeast into a larger container, e.g. in a tbsp, and let it warm up to room temperature.
  2. Pour 2 1/4 cups of water in a larger bowl and microwave it for 1 minute. It should be just warm, not hot. Pour that warm water into the bread machine’s bucket.
  3. Add 1/2 tsp of salt.
  4. Add 1/2 cup of ground flax seed.
  5. Add 2 cups of all-purpose unbleached flour.
  6. Spread about a third of the yeast over the flour. Make sure you don’t drop it in the water - that would be a waste.
  7. Add 1 cup of whole wheat flour.
  8. Spread about half of the remaining yeast over the flour. Make sure you don’t drop it in the water.
  9. Add 1 cup of whole wheat flour.
  10. Spread the rest of the yeast over the flour. Make sure you don’t drop it in the water.
  11. Use the longest program on your bread machine. That’s usually “French Bread” and it takes about 4 hours.

Deviations

More Whole Wheat Flour

To make the bread healthier, replace 1 cup of white flour with 1 cup of whole wheat flour. The bread will not rise as much. It will also need a little bit (1/16 cup) more water. If you still like the bread, you can replace the remaining cup of white flour with whole wheat flour following the same rule. In that case you may also need to increase the salt to 1 tsp to make it rise a bit more and to improve its taste.

More White Flour

If you like puffy bread, you can replace 1 cup of whole wheat flour with white flour. When I did that, my bread rose out of the bucket. So if you want to use white flour exclusively, you should bake it in the oven.

Feb 18, 2010

How to Make Your Linux Desktop Look Better than a Mac

The Linux distribution I use is openSUSE 11.0 with a KDE 3.x shell, but the procedure below should work on any modern Linux distribution with either KDE or GNOME shell. As a matter of fact, I was reading posts from the Ubuntu forums while I was trying to enable it on my openSUSE. I was also able to reproduce the procedure on KDE 4.x.

[Screenshot: EnableEffects-Shake]

Background: What Is What

Compiz and Compiz Fusion

Compiz is a low-level platform that enables the desktop effects we’ll be leveraging. Compiz Fusion is an upgrade on top of Compiz that implements a large variety of effects. While Compiz Fusion has merged into Compiz, the tools that manage them are still separate. I encourage you to read the Compiz wiki if you want to get the best out of your desktop.

Emerald

Emerald is a window decorator that works on top of Compiz Fusion. The Compiz wiki claims it is no longer supported and that Compiz Fusion itself should be used instead, but Emerald seems to work better than Compiz’s default window decorator. Besides, the Mac themes I’ve found require Emerald.

KDE/GNOME

For everything else – dialog box controls, scroll bars, icons, etc., you still need your existing GUI shell to do the ground work.

Step 1: Enable Compiz Effects

Use the Distribution page on the Compiz wiki as a roadmap. It references the corresponding pages on the most popular distributions’ web sites that explain how to enable Compiz effects on each of those distributions. What follows is my distillation of those procedures. Make sure you understand each step and perform it successfully.

Update your video driver

Before you start messing with graphics, make sure you have the latest driver for your graphics card. For openSUSE, there are one-click installs for both NVIDIA drivers and ATI drivers.

Install Compiz packages

Go to your software management tool. On openSUSE that is YaST. You’ll find it at: Start menu > Utilities > System > Administrator Settings. Within YaST click on Software > Software Management. Search for ‘compiz’. Read the descriptions and add all the packages that seem to be either runtime or themes. Skip the ‘…-devel’ packages.

Enable Compiz effects

On openSUSE you’ll find this tool at: Start menu > Utilities > Desktop > Desktop Effects. On other distributions, search the Start menu for ‘Desktop Effects’. You’ll see a dialog box where the only control you can change is an Enable desktop effects checkbox. Check it. Now you can choose a pre-built configuration from the drop-down list. I’ve chosen the ‘Hollywood got nothing’ configuration.

[Screenshot: EnableEffects]

You will see some effects, but your desktop may become a little messy - window control buttons may not get drawn, and if you logout, desktop effects will get disabled again. So don’t stop here. Keep going.

Choose Compiz as a window manager

On KDE 3.x, open Personal Settings. (Search for it in the Start menu.) Navigate to KDE Components > Session Manager. Select Compiz from the Window Manager drop-down list.

[Screenshot: WindowManager]

Log out and log back in.

Note: If your task bar disappears, don’t panic. The task bar exists but for some reason it doesn’t get drawn. What fixes it is this procedure: right-click where the task bar is supposed to be and select Configure Panel from the context menu. Change the task bar’s location on the screen and click Apply. Then put it back where you want it to be and click Apply again. Log out and log back in. If this didn’t work the first time, try it again without putting the task bar back in its place. You’ll do that separately.

Enable and configure Compiz Fusion effects

On openSUSE you’ll find this tool at: Start menu > Utilities >  Desktop > CompizConfig Settings Manager. On other distributions, search the Start menu for ‘CompizConfig Settings Manager’. You will see many icons with checkboxes next to them. You enable an effect through the checkbox and configure it through clicking on the icon. For additional help, use the CCSM page on the Compiz wiki.

The important step here is to navigate to Preferences and to set Backend and Integration to Flat-File. Log out and log back in after setting this.

Note: Some Compiz Fusion effects may conflict with base Compiz effects. In those cases a message pops up to warn you. I haven’t tried to override base Compiz effects with Compiz Fusion effects. Feel free to try it at your own risk.

Step 2: Make Your Desktop Look Like a Mac

At this point you should have a desktop with Compiz effects and the windows should be decorated according to your existing KDE/GNOME theme. (Actually, my windows weren’t properly decorated.)

Install Emerald

Just like you installed Compiz packages, search your software management tool for ‘emerald’ and install anything that seems to be either runtime or themes. Skip the ‘…-devel’ packages.

Enable Emerald

Open CompizConfig Settings Manager. Click on the Window Decoration plug-in. In the Command box, enter ‘emerald --replace’. Log out and log back in. Now you should have a default Emerald theme enabled and all windows should be properly decorated.

Download Emerald themes

Go to http://kdelook.org/ and search for ‘OS X’. Alternatively, you may search http://gnome-look.org/ or http://compiz-themes.org/. (I believe those three sites share the same repository.) I have downloaded these two themes:

Feel free to download and try more themes. Emerald theme files are gzip archives with a .emerald extension.

Manage Emerald themes

On openSUSE you’ll find this tool at: Start menu > Utilities >  Desktop > Emerald Theme Manager. On other distributions, search the Start menu for ‘Emerald Theme Manager’. Click on Import and select a downloaded theme file. The new theme should show up on the list immediately.

To select a theme, simply click on it on the list and it will take effect immediately. I’ve chosen Mac OS X Ubuntu Air. When you are done, click on Quit.

Conclusion

There are still components of your desktop that Emerald doesn’t touch – buttons, icons, scroll bars, splash screen, etc. Keep searching http://kdelook.org/, http://gnome-look.org/, and http://compiz-themes.org/ for KDE/GNOME/Compiz themes that make those components look better.

Feb 16, 2010

Quote Me: You Can Only Become So Good

    You can only become as good as your mentors have taught you.

I don’t deny that genetics play a role in how much and how fast a person can learn. But in order for a person to learn, they need some outside material to learn from. And it is the quality of that outside material that develops the brain. I emphasize the word outside, because one can hardly challenge himself without a spark from the outside. That outside environment is primarily the people you learn from as well as the problems you are challenged with.

So keep asking yourself: Are you learning enough? Are you being mentored by good people? Can you improve the quality of your learning environment?

Feb 15, 2010

About Me @ Microsoft (part II)

Since November 2009 I’ve been a software development engineer (again) on the Parallel Computing Platform. I work on the .NET side of the platform, a.k.a. PFX. Our team blog is the most extensive source of information on PFX. I recently contributed a post there – Maintaining a Consistent Application State with TPL. If you are interested in the subject, please take a look and let me know what you think.

If you want to get started with Parallel Computing or you want to find a structured reference of the current state of PFX, please visit the Parallel Programming in the .NET Framework page on MSDN.

If you have questions on PFX, please post them on the PFX forum and we’ll be happy to answer them.