Saturday, February 11, 2017

Comparing renting versus buying

If you are happy renting a place for $2000 a month, does it make financial sense to buy that $1,000,000 home down the street?  Let's compare the two scenarios.

Scenario 1: renting for 30 years

Assuming:
  1. Rent is $2000 a month now, growing at 5% a year.
  2. The stock market is growing at 10% a year.
Renting for 30 years costs roughly: sum(24000 * 1.05**i * 1.1**(30-i) for i in range(30)) => $7M (each year's rent is compounded at the market rate to year 30, i.e. the cost counted here is the opportunity cost of the money spent on rent).

Scenario 2: buying a place and living in it for 30 years

Assuming:
  1. A 30-year mortgage on the full $1M has a monthly payment of $5,368.22, i.e. 5,368.22 * 12 = 64,418.64 a year.
  2. Mortgage interest is 5%.
  3. The house appreciates at 5% a year.
  4. Property taxes start at 1% of the purchase price ($10,000 a year) and go up by 2% a year.
Over time:
  1. The mortgage payments come out to: sum(64418.64 * 1.1**i for i in range(30)) => $10.6M (again compounding each year's payment at the market rate).
  2. The house is now worth 1000000 * 1.05**30 => $4.3M.
  3. Taxes: sum(10000 * 1.02**i * 1.1**(30-i) for i in range(30)) => $2.1M.
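
As a sanity check, here is a small Python sketch that reproduces these back-of-the-envelope figures under the stated assumptions. The variable names are arbitrary, and the net comparison at the end is just one way to roll the three buying numbers into a single figure.

RENT = 2000 * 12           # first-year rent
RENT_GROWTH = 1.05         # rent grows 5% a year
MARKET = 1.10              # stock market returns 10% a year
YEARS = 30

PRICE = 1_000_000
MORTGAGE_RATE = 0.05       # 5% mortgage interest
HOUSE_GROWTH = 1.05        # house appreciates 5% a year
TAX = 0.01 * PRICE         # 1% property tax in year one
TAX_GROWTH = 1.02          # taxes go up 2% a year

# Standard amortization formula for the monthly payment on a 30-year loan.
r = MORTGAGE_RATE / 12
n = YEARS * 12
monthly = PRICE * r / (1 - (1 + r) ** -n)   # ~5,368.22
annual = monthly * 12                       # ~64,418.64

# Each year's outlay is compounded at the market rate, i.e. the opportunity
# cost of not investing that money in the stock market instead.
rent_cost = sum(RENT * RENT_GROWTH**i * MARKET**(YEARS - i) for i in range(YEARS))
mortgage_cost = sum(annual * MARKET**i for i in range(YEARS))
tax_cost = sum(TAX * TAX_GROWTH**i * MARKET**(YEARS - i) for i in range(YEARS))
house_value = PRICE * HOUSE_GROWTH**YEARS

print(f"renting: ${rent_cost / 1e6:.1f}M")                        # ~$6.9M
print(f"buying:  ${(mortgage_cost + tax_cost) / 1e6:.1f}M paid, "
      f"house now worth ${house_value / 1e6:.1f}M")               # ~$12.7M, ~$4.3M
print(f"buying, net of the house: "
      f"${(mortgage_cost + tax_cost - house_value) / 1e6:.1f}M")  # ~$8.4M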

Monday, September 7, 2009

Cloud computing vs. traditional hosting

Over the course of the last year, I've had a few conversations with people about the difference between cloud computing and traditional hosting. I've come to realize that cloud computing can really be thought of as a natural evolution of the hosting industry, where traditional hosting was the first step. A cloud computing provider usually supports a much wider variety of services on top of traditional hosting. This article does a good job explaining some of the differences. Perhaps the following analogy is appropriate: buying the services of a traditional hosting provider is like renting a set of electricity generators, whereas a cloud computing provider operates an electric power grid. The idea is that a cloud computing provider makes it extra easy to treat computing resources like a pay-as-you-go utility service.

What then are the extra services that take cloud computing one step beyond traditional hosting? Here are a few:


  • Load balancing: a flexible set of computing resources can all be transparently load-balanced behind a virtual router.

  • Content delivery: the work of a content delivery network can be done transparently by the cloud computing provider, for example through Amazon CloudFront.

  • Failover: if your traffic is load-balanced across multiple data centers and one data center goes offline, a cloud computing provider should be able to fail the traffic over to the others. (If they can't now, they should provide this service. :))

  • Scalability: the number of compute resources used should automatically scale up or down based on demand (see the sketch after this list).

  • Tooling: with the advent of mainstream cloud computing providers, the tooling has improved tremendously. For example, Windows Azure tools allow developers to test everything out locally and easily deploy to the cloud, enabling people to build scalable services with little effort.

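To make the scalability point above concrete, here is a toy Python sketch of the kind of demand-based scaling decision a cloud provider automates for you. Every name and threshold in it is hypothetical; real providers expose this as a managed auto-scaling policy rather than code you write yourself.

# Decide how many instances to run based on current load.
def target_instance_count(current_instances, avg_cpu_utilization,
                          min_instances=2, max_instances=20):
    if avg_cpu_utilization > 0.75:      # overloaded: add capacity
        desired = current_instances + 1
    elif avg_cpu_utilization < 0.25:    # mostly idle: shed capacity
        desired = current_instances - 1
    else:                               # within the comfort band: hold steady
        desired = current_instances
    return max(min_instances, min(max_instances, desired))

# Example: 4 instances running at 80% CPU -> scale out to 5.
print(target_instance_count(4, 0.80))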


Last but not least, everything above is geared towards businesses. What about cloud computing for consumers? In that space, it's the applications that drive the platform--applications such as Microsoft's Live Mesh, Ubuntu One, etc.

Monday, May 19, 2008

Response to smithj's post regarding update schemes

smithj stated that my appliance update proposal rests on the assumption that configuration data can be easily separated from the operating system. This is true and I believe it is doable.

First, let me define what I mean by configuration. We do not have to treat all files in /etc as configuration data. For example, if my appliance ships with samba functionality, but it does not allow configuration of samba by the end user, then for all intents and purposes, from my appliance's perspective, /etc/samba/smb.conf is not a configuration file. Configuration data corresponds only to settings that the ISV allows the end user to change. (This should be a very limited set, by the way. If I had a hundred dials on my microwave oven, I'd never figure out how to use it. The same goes for TiVo--the limited number of user-configurable options works.)

The only configuration that an end user should be performing on an appliance is through the appliance web interface. Since the ISV has full control over all configuration data, it can choose to put it all in a database on a separate partition, which achieves the separation of configuration from the operating system.

smithj's example of the end user needing to write a cron script is not realistic because most appliance users should not need to know what cron is. However, suppose that the ISV implemented scheduling capability based on cron and needed to configure cron dynamically. Then the ISV-provided program will need to store this configuration in its database for easy queryability, and then write a file out to /etc/cron.d/. These are things the program should do regardless. The only extra step that needs to be taken in the Amazon EC2 world is to redo the write out to /etc/cron.d/ on every reboot, which isn't very hard. (Note that Amazon EC2's root file system is not read-only, but you can think of it as a read-only operating system partition with a read-write tmpfs partition layered on top with UnionFS.)
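
To make that concrete, here is a minimal Python sketch of the boot-time step just described; every path and the database schema are hypothetical. The appliance regenerates its cron entries from the configuration database on every reboot, because /etc/cron.d/ lives on the disposable operating-system image rather than on the data partition.

# Regenerate /etc/cron.d/ entries from the appliance's configuration database.
import sqlite3

CONFIG_DB = "/data/appliance-config.db"      # ISV-managed data partition
CRON_FILE = "/etc/cron.d/appliance-jobs"     # rewritten on every boot

def regenerate_cron_jobs():
    conn = sqlite3.connect(CONFIG_DB)
    rows = conn.execute("SELECT schedule, command FROM scheduled_jobs")
    lines = [f"{schedule} root {command}" for schedule, command in rows]
    conn.close()
    with open(CRON_FILE, "w") as f:
        f.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    regenerate_cron_jobs()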

In this world where the ISV manages all configurable data, why not maintain it in a separate location from the operating system and make upgrades as easy as swapping the OS image?

Two additional points to smithj:
- merges are nice in theory but bad in practice because in the event of a merge failure neither the end user nor the ISV has a tool to rectify it.
- you are not the target audience of an ISV. ;)

Microformats

Microformats are an easy way to integrate semantic meaning into web pages. They're already being adopted by popular websites. Firefox has an add-on called Operator that can parse them automatically.

For example, here is the Digg profile page for jon1012.



Operator parsed the embedded hCard automatically and gave me the opportunity to export his contact information as a vCard.

The snippet of HTML looks like this:


<div class="vcard">
<div class="vcard-side">
<img class="photo" src="/users/jon1012/h.png"
alt="jon1012" width="120" height="120" />
</div>
<h2><span class="fn">jon1012</span></h2>
<div class="profile-location">A person from
<span class="adr"><span class="locality">Paris, France</span></span>
who joined Digg on December 7th, 2005
</div>
</div>
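
As a quick illustration (this is not how Operator itself works), here is a Python sketch that pulls the hCard fields out of markup like the snippet above using BeautifulSoup. The microformat is just class names on ordinary HTML, so any HTML parser can read it.

# Extract hCard fields from a fragment of Digg-style profile markup.
from bs4 import BeautifulSoup

html = """
<div class="vcard">
  <h2><span class="fn">jon1012</span></h2>
  <span class="adr"><span class="locality">Paris, France</span></span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
card = soup.find(class_="vcard")
print(card.find(class_="fn").get_text())        # jon1012
print(card.find(class_="locality").get_text())  # Paris, France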


Here's an example with Google Maps.




and Wikipedia.


Sunday, May 18, 2008

Using Windows CardSpace


Windows CardSpace is a feature of the .NET Framework that manages information cards.

One use case is to associate a card with an online account so that you can use it to log on to the site in the future.



When the website requests a card, CardSpace will first display information about the site.



The first time through, you have no cards, so you may create a personal card.





When done, send it to the relying party.



Unlike the personal card created above, managed cards are issued by a separate identity provider. In that case, you import the card.



When a relying party requests a managed card, your computer will in turn request validation from the identity provider. The resulting token, carrying the identity provider's blessing, is sent to the relying party. This is a mechanism by which authentication and authorization can be off-loaded to a third party.
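
Here is a toy Python simulation of that flow; it is not the actual CardSpace/WS-Trust protocol, and all names and keys in it are made up. The identity provider signs a token over the requested claims, and the relying party only needs to trust the provider's key to verify it, which is what lets authentication be off-loaded to the third party.

# Simulate an identity provider issuing a signed token for a relying party.
import hashlib
import hmac
import json

IDP_KEY = b"identity-provider-secret"   # shared secret for this toy example;
                                        # real systems use public-key signatures

def identity_provider_issue_token(user, claims):
    payload = json.dumps({"user": user, "claims": claims}, sort_keys=True)
    signature = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def relying_party_verify(token):
    expected = hmac.new(IDP_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = identity_provider_issue_token("alice", {"email": "alice@example.com"})
print(relying_party_verify(token))   # True: the relying party trusts the provider's signature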


Saturday, May 17, 2008

Amazon EC2 makes traditional updates obsolete for appliances

Linux distributions are obsessed with updates, and rightly so, given how frequently security fixes must be published. When Linux-based software appliances became popular in the last couple of years, the update paradigm was brought along for the ride. Package manager or system manager software runs on the appliance and, either automatically or at the user's request, updates the entire appliance to a version published by an ISV.

Package managers are big and complex beasts with no small chance of failing in the middle of an update. Though unlikely, there is also a non-zero chance that such a failure is disastrous, rendering the appliance unbootable. Instead of dealing with the downtime and uncertainty associated with updates, the unique nature of the appliance model, combined with the power of cloud computing offered by Amazon EC2, renders traditional updates via package managers obsolete.

Before delving into the future of appliance updates, consider how a typical home appliance such as a router or a TiVo does updates. These hardware appliances contain special flash ROM that holds the firmware. User configuration data is stored in a different location. When a manufacturer releases a firmware update, users download the entire firmware and replace it in one piece. This update mechanism is dead simple and fairly failsafe.

Applying this paradigm to the virtual appliance world, the firmware is the operating system. When deployed on Amazon EC2, the ISV may simply publish all of their appliance versions as templates. Their users associate their own data partition with the common operating system template such that the operating system (firmware) is read-only and all user-specific data is placed in the data partition. When the time comes to update it, the user may simply associate a newer operating system template with the same data partition. If a database schema needs to be migrated to work with the newer OS version, take a backup snapshot of the data partition and run an ISV-supplied migration script first. That was easy, wasn't it?
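
Here is a toy, self-contained Python simulation of that update scheme. Everything in it is hypothetical; on real Amazon EC2 the "template" would be a machine image and the "data partition" a separate volume, but the point is the same: an update swaps the read-only OS image and re-attaches the user's data, rather than patching packages in place.

# Model an appliance as a read-only OS template plus a read-write data partition.
from dataclasses import dataclass, field

@dataclass
class DataPartition:
    files: dict = field(default_factory=dict)
    snapshots: list = field(default_factory=list)

    def snapshot(self):
        self.snapshots.append(dict(self.files))   # cheap rollback point

@dataclass
class Appliance:
    os_version: str          # the published, read-only OS template
    data: DataPartition      # the user's own data partition

def update(appliance, new_os_version, migrate=None):
    appliance.data.snapshot()        # back up before migrating
    if migrate:
        migrate(appliance.data)      # ISV-supplied schema migration, if needed
    # The "update" is just booting the new template against the same data partition.
    return Appliance(os_version=new_os_version, data=appliance.data)

# Example: move an appliance from 1.0 to 2.0 without touching its settings.
box = Appliance("appliance-os-1.0", DataPartition({"settings.db": "user config"}))
box = update(box, "appliance-os-2.0")
print(box.os_version, box.data.files)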

R.I.P. traditional package managers. Virtualization ushers in a new way of updating appliances that offers shorter downtime and is less error-prone.

Ubuntu Live from HD

I followed the instructions at https://help.ubuntu.com/community/Installation/FromLinux to install from the hard drive using the contents of a LiveCD image. Everything worked flawlessly.