And today, exactly 5 years later, I’m posting the last.
Well, maybe not the last for ever, but I’m certainly not planning any further posts here for the foreseeable future.
Some time after you read this I’ll be converting the blog to static HTML and moving to its new semi-permanent home courtesy of Matt Quail, to whom I am much indebted. The painstakingly-chosen domain name will remain and hopefully none of the links will break.
To the regular readers and commenters, thanks so much for making the effort to visit and contribute. I’ll leave the syndication feeds in place, so by all means stay subscribed on the off chance that I publish anything further here. Do unsubscribe from the comments feed, though.
Frankly I’m just not excited about it any more. I am not a naturally gifted writer — oh no stop it, you are too kind — and so each article took a huge amount of careful crafting and ruthless editing before it reached even my low quality standards. After a while this just started to feel like work.
Other factors were also in play. Blogging as a whole seems to be in something of a decline, so there isn’t the degree of inter-blogger interaction we saw back in the heyday. Fewer bloggers, fewer posts, fewer readers. And for me, just not quite enough feedback or reward.
Also, spammers are definitely one of the reasons for closing the blog. I’m going to be fucking glad not to have to deal with their shitting in my nest.
I’d much rather close the blog than simply abandon it, like most people seem to do. I offer no explanation for this.
Here are some posts that I’m particularly proud of. You might wish to revisit them if you’re feeling particularly nostalgic or bored.
Had I the inclination to continue publishing this blog, I might have updated the Virtual Furniture Police article with an example from the NSW education department. They are handing out the federal government-sponsored laptops, but locking them down so severely as to make them almost completely useless. No internet access, and no installing new software, even for teachers. WTF?
Without a doubt, the lowlight would have to be one individual who responded to the Blogging Horror post with some moderately offensive comments. And the spammers. Oh the spammers.
Instead of blogging here, I intend to participate more actively on other forums (the terrible state of forum software notwithstanding), microblogging at identi.ca, and other social networking sites.
I’ll try to keep the about page up to date with links to my various online activities. Feel free to drop by.
user_timeline.xml?count=100&page=1. Not only that, but they include a large amount of redundant profile stuff in the <user> element. And not only that, but Twitter sometimes returns a “Twitter is over capacity” page instead of your tweets.
What we want to do is a) detect any files which don’t contain tweets, b) remove the redundant user profile, and c) combine the results into a single file.
Well, friends, here is a shell script to do exactly that. You’ll need zsh and xsltproc, both of which are standard on Mac OS X and most sane Linuxen.
zsh is needed to sort the input files in numeric, as opposed to lexicographic, order. If you know of a way to do this in bash, let me know…
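Failing that, here’s a rough sketch of how the equivalent sorting loop might look in bash. It’s untested, it assumes the filenames end in page=N as in the example above, and a bash version of the script would also need a bash shebang and ${input_args[*]} in the heredoc rather than ${input_args}:
# sort the arguments numerically on the page number (the field after the last '=')
input_args=()
while IFS= read -r f; do
    [[ -f $f ]] || { echo "Not a file: $f" >&2; exit 1; }
    input_args+=("<input>${f//&/&amp;}</input>")
done < <(printf '%s\n' "$@" | sort -t= -k3,3n)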
Output is on stdout, so just redirect to your filename of choice:
$ tweetcombine user_timeline.xml\?count=100\&page=* \
> tweet_archive.xml
Here’s the script:
#!/bin/zsh
# Combine all of the twitter user_timeline.xml files specified on the command line into a single output
# Written by Alastair Rankine, http://girtby.net
# Licensed as Creative Commons BY-SA
input_args=()
for f in ${(on)*}; do
    [[ -f $f ]] || { echo "Not a file: $f" >&2; exit 1; }
    input_args+="<input>${f//&/&amp;}</input>"
done
xsltproc - <<EOF
<?xml version="1.0"?>
<!DOCTYPE inputs [
  <!ATTLIST xsl:stylesheet id ID #REQUIRED>
]>
<?xml-stylesheet type="text/xml" href="#style1"?>
<inputs>
${input_args}
<xsl:stylesheet id="style1" version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <!-- Copy elements and attributes through, recursing into children -->
  <xsl:template match="*">
    <xsl:copy>
      <xsl:copy-of select="@*"/>
      <xsl:apply-templates/>
    </xsl:copy>
  </xsl:template>

  <!-- Unwrap each file's statuses element; a single wrapper is added by the inputs template below -->
  <xsl:template match="statuses">
    <xsl:apply-templates/>
  </xsl:template>

  <!-- Drop the redundant user profile, and don't copy this embedded stylesheet into the output -->
  <xsl:template match="user"/>
  <xsl:template match="xsl:stylesheet"/>

  <!-- Process each input file, bailing out if it doesn't contain tweets -->
  <xsl:template match="input">
    <xsl:choose>
      <xsl:when test="document(.)/statuses">
        <xsl:apply-templates select="document(.)"/>
      </xsl:when>
      <xsl:otherwise>
        <xsl:message terminate="yes"><xsl:value-of select="."/> does not contain a statuses element</xsl:message>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

  <!-- Combine everything into a single statuses element -->
  <xsl:template match="inputs">
    <statuses type="array">
      <xsl:apply-templates/>
    </statuses>
  </xsl:template>
</xsl:stylesheet>
</inputs>
EOF
I think this method of sticking filename arguments into an XML document with an embedded stylesheet is quite a powerful way of processing XML documents from shell scripts. (I probably should put the <input> tags into a separate namespace, though…)
But I’m also not happy with the concept of just abandoning the blog, as so many others seem to do. I like the idea of putting it into hibernation, where it still can be linked to and indexed in search engines, but just not active.
So I’ve been working out how to do that. It’s not as easy as I expected. And, yes, worth blogging about…
I thought I wanted something quite straightforward. Basically I was going to convert the site into static HTML by walking over it with wget or similar. Then find a host who could serve it up cheaply and reliably.
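Something along these lines is what I had in mind; the flags are illustrative rather than a tested recipe:
$ wget --mirror --convert-links --page-requisites --html-extension \
       --no-parent http://girtby.net/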
The cheap part was a requirement but also an expectation. I just thought that someone somewhere would offer hosting space for static files and charge about the same as my domain renewal each year. I’m not interested in paying much more than that because, well, otherwise I might as well just keep paying for the hosting plan I’m on now and keep the blog open.
I also wanted the new hosting to have cool URIs that don’t change. So if, for example, you’ve bookmarked my insightful not-to-be-missed 2007 post on fixing Ubuntu 7.04 display problems, you’ll be happy to hear that I intend to make sure it serves you the same long-obsolete advice for many decades to come.
And that’s the real trick. As I found out, keeping the URIs the same is not as easy as it sounds.
This was the first thing I tried. wordpress.com offers free-to-inexpensive hosting and, hey, I’m already running wordpress, so it should be a snap to switch over. I’d just run the blog as-is with comments disabled and it would be just like in hibernation.
But then I read the fine print, and found that I wasn’t going to be able to keep my current theme with its famous hand-tuned aside formatting. Not ideal, but not a show-stopper either. Also, importing the images and other assets would require involvement from wordpress support. Bit of a pain, but liveable.
So as an experiment I started exporting from my blog and then importing to wordpress.com. How well did this work? Well, about as well as you can expect from software that is not designed to go clunk. Of course, there was no error message to determine what the problem was, nor even an obvious way to erase all the partially-imported content.
Having resolved by this stage that I was going to convert it to a static site and host it somewhere, I soon came upon the idea of using Amazon’s S3 service. I already had an account, and it looked like a great solution. Cheap, reliable and easy. (Pick any two.)
There are (at least) two problems with using Amazon S3 for this task.
Firstly there is a technical limitation with hosting a “naked” domain (ie girtby.net with no hostname). Basically the way you create a virtual host on S3 is to create a “bucket” with the same name as your host, and then create a CNAME from your domain to that bucketname.s3.amazonaws.com. The problem is that you cannot create a CNAME at the root of a domain; it has to be from a hostname within that domain (eg www.girtby.net will work, girtby.net will not).
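In zone-file terms, what I’d need looks roughly like this (the records are purely illustrative):
www.girtby.net.   IN  CNAME  www.girtby.net.s3.amazonaws.com.   ; fine: bucket named "www.girtby.net"
girtby.net.       IN  CNAME  girtby.net.s3.amazonaws.com.       ; not allowed: a CNAME cannot share the zone apex with the SOA and NS records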
So that instantly breaks my URIs, but even if I could solve that problem there’s another limitation. Basically Amazon S3 is not a full hosting environment and doesn’t provide some common web server features. Most notably it won’t serve up / using /index.html.
So I would end up needing my own web server which would perform this redirection. But that defeats the purpose of using Amazon S3; if I had access to a reliable web server, I’d just use that to serve up the site and be done with it.
Yes, I was quite amazed to discover that you can use Google’s App Engine as a host for an entirely static site. This is an interesting possibility because not only is it a real hosting service, but I could also develop software (maybe even a blogging engine!) to bring my site back to life, should I be so inclined.
The plan is almost perfect, but has one flaw. Although Google appears to have once allowed hosting from a naked domain, that ability now seems to have been revoked. Which is a bit unfortunate, as it breaks my URIs.
So that’s really a problem, and one that I must admit I did not foresee when I started using the naked domain as the preferred domain for the blog.
I was once quite enamoured with this service, so what changed?
Well basically they changed their prices. And by that I mean they increased their prices. And by that I mean their prices went through the roof.
Up until this month you could download 75 tracks/month for US$20. Now, you’ll pay US$31 for those tracks.
Obviously this isn’t the first time that someone has hiked their prices by 50%, but that’s not really the point of this post. Instead it’s the other, extremely deceptive, change that went along with the overt price hike. Your monthly eMusic fee no longer allows you to purchase 75 tracks; instead you’ll get 75 “credits”. They want you to think that a credit equates to a single track, but it quite obviously doesn’t. If you’re purchasing by the album (and for many tracks you have to) then the number of credits required almost always exceeds the number of tracks.
For an admittedly extreme example, consider the epic post-rock album Lift Your Skinny Fists Like Antennas To Heaven by Godspeed You! Black Emperor. It has four 20-minute — mind-blowing and highly recommended — tracks. Before the pricing change, those four tracks were about 5% of the 75-track monthly quota. Now, downloading the album requires no fewer than 24 credits, which is about a third of the quota: a sixfold increase. None of the tracks can be downloaded individually any more.
So you might think that, regardless of the increase, it still works out at roughly US$10 for an album, which sounds very fair. But this just raises the question: why don’t they simply charge US$10 per album and be done with it?
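To spell out the arithmetic (my numbers, rounded):
Before: US$20 / 75 tracks  = about US$0.27 per track,  so a 4-track album cost about US$1.07
After:  US$31 / 75 credits = about US$0.41 per credit, so a 24-credit album costs about US$9.92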
The subscription plan and the “credits” and all that nonsense is just annoying, and that’s the main reason why I’m leaving. I can pay US$10/album at Amazon and I don’t have to worry about my monthly quota, rollovers, unused credits, the terrible website, the mysteriously “unavailable” albums or individual tracks, and all of the other specific problems with the eMusic service.
I admit I had a good ride. And maybe something had to change at eMusic anyway. The whole premise of charging per track is fraught with problems. The economics of it is predicated on 4-minute radio-friendly pop songs, and just doesn’t work out for other types of music.
What I really want is to pay a fixed amount for a fixed duration of music. I’d easily pay US$1 per 10 minutes of music. It seems like absolutely the fairest and simplest way of doing things, and the way that eMusic could have changed while still keeping my business.
Failing that I’ll just pay per album. But not at eMusic.
This does seem to be blatant gouging on their part, given that bytes are bytes: whether they are destined for the phone or a tethered laptop, the cost to the carrier is the same. This criticism is warranted in my opinion.
Carriers may claim that tethered laptops inevitably draw more traffic from individual subscribers. But I would suggest that the incremental traffic from a tethered laptop is a lot lower for the iPhone than for other 3G phones. Let’s face it, the iPhone is a pretty capable standalone device, and you’ll rarely need to break out the laptop to get online. Other phones are far inferior at browsing the net directly, and so I’d expect that there is a correspondingly larger proportion of traffic from tethered laptops of subscribers with these phones. This makes the additional pricing seem even more unfair.
But not all of the hate should be directed towards the carriers.
I am yet to see an answer to this question: how do the carriers know which traffic originates from the iPhone itself and which from a tethered laptop? I don’t know the answer definitively, but I assume the iPhone must mark the tethered traffic somehow. My guess is that it passes the laptop’s PPP session straight through to the carrier, instead of terminating it on the phone and NATting the traffic.
Regardless, it is Apple that deserves at least some of the blame here for enabling the carriers to detect tethered traffic in the first place. I know of no technical reason why they needed to do this; it sounds like a purely business decision. And one they didn’t need to make; surely the carriers are all Apple’s bitches at this point?
Boo, carriers who charge for tethering. Boo, Apple.
Anyway, one colleague didn’t believe that the answer in the back of the book was correct, and he offered to bet that by running a computer simulation he could prove the book (and me) wrong. I’m not a betting person, but for some reason, possibly euphoria at the prospect of the upcoming partying seminars, I immediately accepted his bet, wagering $100.
What follows is my attempt to win that bet.
So that there is no argument, I’ll reproduce the exact wording of the problem as stated in the puzzle book:
90. Four different pieces of candy are placed in a bag. One is chocolate, one is caramel, and two are licorice. Without looking in the bag, I draw two pieces of candy from it, and place one of them, which is licorice, on a table.
What are the chances that the second piece of candy I have in my hand is the other piece of licorice candy?
My colleague said the answer is ⅓, simply because the candy in the hand can only be one of three other candies still unseen. Of course this is a classic Monty Hall conditional probability problem, and he is quite wrong.
The key insight in this puzzle is that when I (as the person stating the puzzle) put the piece of candy on the table, I am selecting it, just as Monty does when he picks the door with the goat. There’s no element of randomness.
So the correct way to assess the probability is to think about the possible combinations of candies in your hand. There are six: chocolate and caramel, caramel with either licorice, chocolate with either licorice, and the two licorice together. Now we know that one of these combinations, the chocolate and caramel, is not possible. That leaves five equally likely possibilities, and only one of them is the one we want. Hence the probability is ⅕.
Anyway the agreed method of settling the bet was to write a computer simulation, so I did just that. Here is the output of a sample run:
Out of 1000000 tries, two licorices were extracted 200432 times.
Estimated probability = 0.200432
We have a winner. Thanks JT, cash will be fine.
Here is the C++ code:
#include <stdlib.h>
#include <time.h>
#include <algorithm>
#include <iostream>
#include <tr1/array>
const long iterations = 1000000;
enum candy {
    chocolate,
    caramel,
    licorice
};
int main(int argc, char *argv[])
{
    ::srand(::time(NULL));

    // Number of times we've pulled out two licorice from the bag
    long two_licorice = 0;

    for(long i = 0; i < iterations;)
    {
        // put the candies in the bag
        std::tr1::array<candy, 4> bag = {{ chocolate, caramel, licorice, licorice }};

        // shuffle them
        std::random_shuffle(bag.begin(), bag.end());

        // pull out two
        std::tr1::array<candy, 2> hand = {{ bag[0], bag[1] }};

        // At least one of the candies we pick out must be a licorice otherwise it doesn't count.
        if (hand[0] != licorice && hand[1] != licorice)
            continue;

        // Count if we've got both licorice
        if (hand[0] == licorice && hand[1] == licorice)
            ++two_licorice;

        ++i;
    }

    std::cout << "Out of " << iterations
              << " tries, two licorices were extracted " << two_licorice << " times.\n"
              << "Estimated probability = "
              << static_cast<double>(two_licorice) / static_cast<double>(iterations)
              << std::endl;

    return 0;
}
So this is what I’m reduced to. Blogging about shelves that I put up last weekend. Yes, shelves.
Those skilled in the art of home ownership will instantly recognise these as Ikea Gorm. But look closely: see how they wrap around the down-pipe? Remember, I’m not a hardware guy. So I’m quite proud of myself for cutting one of the planks in each shelf and using the off-cut to bind it to its neighbour. Bit of a hack, but I defy you carpenters to come up with a better solution.
Another thing to note is that only suckers put together furniture with an allen key. Power drills FTW.
If you have, I’m happy to say yes. Yes, I can.
The two headphones described below are about as much as I can imagine spending on what are basically little speakers that you strap to your head. So this article isn’t so much a review as a freakshow; check out the guy with the weird obsession and the lack of self-restraint!
These are the big brother to the D2000s that I mentioned in my last article. Thanks to an uncharacteristically well-timed purchase, I managed to get these for US$425 from Amazon. They went up massively after I bought them, and I see they are now back around that price, but the AUD has dropped significantly.
US$450-odd is a lot of cash for headphones, and I’ll be the first to acknowledge that fact. What’s more, the D2000s are almost US$200 cheaper, and the difference is incredibly difficult to justify in objective terms. As I said before, the D2000s are already fantastic headphones: supremely comfortable, capable, and easy to drive. Really, there’s no rational reason to upgrade.
However, upgrading to the D5000s is not a decision I regret in the slightest. To my ears the D2000s suffer from very mild boominess in the bass department which is tamed nicely by the Real Mahogany Cups of the D5000s. Or maybe it’s some other bit of audiophile wankery.
I like music, and I especially like to hear it with as much clarity, impact and presence as possible. There’s nothing better on a Friday night than to kick back with some favourite tunes, some quality headphones and a glass or two of red wine. The D5000s are frankly perfect for this job. I’m pretty sure that without going for the insane Sennheiser HD800s (at a wallet-busting A$2400!) you’re really not going to do any better.
You can’t sit around all the time drinking wine and listening to ridiculous headphones. Oh no, not by a long shot. Sometimes you’ve got to get out on the street. And listen to ridiculous headphones.
Behold the Ultimate Ears Triple-Fi 10vi IEMs. You stick them in your ear and instead of hearing the sounds of nature, children’s laughter, oncoming semi-trailers, and so forth, you hear … whatever you want to hear. It’s amazing.
As I alluded to last time, the big selling point about the IEM is that it blocks sound from outside. This means you don’t need noise-cancelling and all that nonsense. It also means you don’t need to destroy your hearing trying to drown out the noises around you. In summary, good IEMs are basically earplugs with speakers in them.
Last time around I didn’t have a choice for an IEM. I tried the little brother to these, the Super-Fi 4vi. Pretty much the only claim to fame for those was the fact that they would fit into the original iPhone headphone jack. But they sounded awful, so I ditched them.
Later, in a moment of bonus-fuelled excitement, I clicked a button and was A$640 the poorer. But when the Triple-Fis arrived, I was so much the richer. After some time trying out the different tips and getting used to different insertion techniques (yeah, I know, that’s what she said) they sound absolutely amazing. Better than most full-size headphones out there, in fact.
Once you’ve got the secret of getting the “seal” just right, they are quite comfortable, and the sound is incredible. It’s obviously more in-the-head than the Denons but the frequency response is amazing. From the deepest bass to the crispest treble notes, all emanating from these tiny little speakers stuck in your ear canals.
Had a bad day? Seriously, get yourself a pair of these and a portable music player, then go out into the evening and walk the streets. Great way to clear the head.
But look both ways before crossing the road, because you won’t hear that bus otherwise.
The story was about a satellite that was crashing to earth. It was almost certainly Salyut 7, which came down in 1991. The memory of Skylab, which crashed in Australia in 1979, was still fresh in people’s minds. As always, the media was anxious for a local angle, and the possibility of a Skylab re-run, with an added dash of panic-mongering, was too tempting for them to resist.
Media Watch tracked the published predictions of the crash site as the re-entry date approached.
A few weeks out, some media outlets reckoned that the satellite would fall somewhere in the Indian Ocean, Australia, or the Pacific.
A week out, the predictions narrowed to mainland Australia.
Days away, and it looked more like Western Australia. Towns such as Kalgoorlie were becoming extremely worried at this point. Rumours abounded of satellite crash insurance being sold to nervous locals.
On February 7 1991, Salyut 7 crashed to earth.
In South America.
Littlemore delivered the punchline, describing the reporting as “a lesson on the difference between precision and accuracy.”
Keen observers will have noted that I have tended to blog each time I try out a new version control system, and this really isn’t an exception. Except that, well, I’m not just trying it out: I actually use Bazaar daily at $WORK, so this is like after-hours practice.
Anyway, I wanted to share this because I’ve found that maintaining a staging and production installation of wordpress, complete with custom modifications and a collection of plugins, is a problem ideally solved by distributed version control. Plus I really like Bazaar, and wanted to show how easy it can be.
I refer to two different machines here, one of which is the production server (ie my hosting provider), and one of which is the staging server (ie my personal machine). Don’t let the fancy terminology put you off; mentally substitute “my box” and “their box” if it helps you.
On my staging server I branched the wordpress source from launchpad’s Wordpress repository, which is regularly synced with the official repository:
[s] $ bzr branch lp:wordpress
[I'll use an [s] to denote commands run on the staging server, and [p] for commands run on the production server.]
This command creates a "working tree" of the wordpress source code — a set of files and directories — and an accompanying repository of revisions. At any time the working tree corresponds to one of the revisions in the repository, plus any uncommitted changes. Each commit creates a new revision in the repository. Pretty standard stuff really.
So for a new wordpress installation I add the wp-config.php file and commit it:
[s] $ bzr add wp-config.php
[s] $ bzr commit -m "Added config file"
See the codex for other local setup instructions; I just want to focus on the source control tool for now.
Unless you've used distributed version control systems before, you might be a bit wary at this point, perhaps wondering what happens when I next communicate with the upstream repository. But fear not, this is exactly the point of a DVCS. I've created an independent branch, and the parent branch doesn't even need to know about mine. So, I can quite happily make changes of my own and also merge in upstream changes, knowing that it is all tracked correctly.
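For example, to see how my branch and upstream have diverged (which revisions each has that the other doesn't), there's a command for that, shown here purely as an illustration:
[s] $ bzr missing lp:wordpress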
But for now let's look at going the other way: publishing my changes to the world.
One of the cool things about Bazaar is that it supports many different protocols for publishing branches. So for example, I can just push my branch to the hosting server using sftp:
[s] $ bzr push sftp://girtby.net/home/alastair/wordpress
This will create a repository on the remote server containing all the revisions in my local repository. It will not, however, create an associated working tree. Bazaar does not (yet?) support updating a working tree over sftp. I guess there are too many potential issues with local conflicts and such. Anyway the solution is to ssh into the production box and do a checkout of the published branch:
[p] $ cd ~/wordpress
[p] $ bzr co .
What's that? You don't have Bazaar installed on your hosting provider? No problem - all you need is python. Just extract Bazaar into your home directory somewhere, add the bin directory to your path, and you're away. You don't even need to compile anything.
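Roughly like this, say (the version number is a placeholder, and the tarball comes from the Bazaar download page):
[p] $ tar xzf bzr-x.y.tar.gz
[p] $ cd bzr-x.y && python setup.py install --home=$HOME   # puts the bzr script into ~/bin and libraries under ~/lib/python
[p] $ export PATH=$HOME/bin:$PATH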
Of course there are many other tasks to set up a production wordpress, but again let's just focus on getting a Bazaar branch associated with the source files.
Of course you could bootstrap everything the other way around. Start with a working wordpress installation on your production server, create a bazaar repository for it, then copy that to your local machine. This would be something like:
[p] $ cd ~/wordpress
[p] $ bzr init .
[p] $ bzr add .
[... bzr rm --keep or bzr ignore the files you don't want ...]
[p] $ bzr commit -m "initial checkin"
You would then branch it locally using:
[s] $ bzr branch sftp://server/path/to/wordpress
At this point you have two Bazaar branches, and they can easily be kept in sync as follows. First let's make sure we're running the latest wordpress on our staging server:
[s] $ bzr merge lp:wordpress
This just says to merge the latest changes from upstream. As always with merging, there is the possibility of a conflict: a change you've made clashes with a change coming from the merge source. In general Bazaar is very good at handling these, and anyway you're very unlikely to encounter them unless you're making modifications to the Wordpress core.
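If you do hit one, Bazaar marks the conflicted files and waits for you to sort them out, something like this (the file name is just an example):
[s] $ bzr conflicts                 # list files with unresolved conflicts
[s] $ bzr resolve wp-settings.php   # mark a file as resolved once you've fixed it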
At this stage it's a great idea to test the installation locally. Hypothetically, if there were any unit tests, you'd run them at this point. Otherwise, you can just check that the articles display properly, the admin interface works, and whatever else.
You can even see a summary of the changes that you're merging:
[s] $ bzr status -v
modified:
wp-admin/admin-ajax.php
wp-admin/custom-header.php
[... snip ...]
pending merges:
westi 2009-02-22 Focus on the first blank field when asking for credentials for upgrade/instal...
ryan 2009-02-22 Allow editing all of a plugin's files. see #6732
westi 2009-02-22 Wrap the apply_filters call in a function_exists check as this can be calle...
[... snip ...]
If everything looks OK, commit and push it up to the production server.
[s] $ bzr commit -m "merge from upstream"
[s] $ bzr push
It should have remembered the push location from last time. Also, just like last time the working tree will need to be updated:
[p] $ bzr up
So I've found wordpress trunk to be fairly stable, but should I ever need it, the DVCS provides a safety net.
If I discover a problem after merging in the latest upstream changes, I can quickly revert simply by using the entirely unsurprising:
[s] $ bzr revert
If, on the other hand, I only discover the problem after pushing up to the production server, it's still quite easy:
[p] $ bzr uncommit
[p] $ bzr update
I can propagate that change back to my staging server by merging back:
[s] $ bzr merge sftp://girtby.net/home/alastair/wordpress
But like I said, I haven't had to use this.
Changes made on the production server are generally easy to sync back to the staging server - I just download a database dump and import it locally. However media such as images are special because they are not stored in the database. Hence you need a way of getting them back to the staging server. I wonder what the answer could be?
Yep, just commit the changes on the production server and merge them back:
[p] $ bzr add assets/2009/01/funny_picture.jpg
[p] $ bzr commit -m "Added funny picture"
[s] $ bzr merge sftp://girtby.net/home/alastair/wordpress
As noted before, the add command can recursively add all files in a directory, so you don't even need to specify the files individually.
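As for the database half of the sync mentioned above, that's just a dump and an import; something like this, with the database names and user as placeholders:
[p] $ mysqldump -u wp_user -p wordpress_prod > dump.sql
[s] $ scp girtby.net:dump.sql .
[s] $ mysql -u wp_user -p wordpress_staging < dump.sql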
Plugins and themes are handled just like any other change. Just extract the plugin to the relevant directory, and add it to the repository:
[s] $ unzip ~/Downloads/coolplugin.zip
[s] $ bzr add coolplugin
At this point I'd probably activate and test the plugin locally, then commit and push to the production server as before.
Bazaar can also track file renames, deletions and moves, but you obviously have to tell it about them. So when coolplugin is updated, be sure to tell Bazaar about any relevant changes before committing:
[s] $ bzr mv --after coolplugin/oldandbroken.php coolplugin/newhotness.php
The --after switch tells Bazaar not to actually do the move; it has already happened, and we're just recording the fact.
... Wordpress auto-update? Put simply, I don't trust it. Will it let me manage my own patches to wordpress should they be needed? I don't think so. Also: FTP? What decade is this? Even FTPS, sheesh.
... git? Well no reason particularly. I've dabbled with git, but it never really clicked for me. The concepts and terminology and command set still seem slightly obscure to me: "Want to check in your changes? Just use 'git albatross'! Want to view checkin comments from the most recent merge? No problem, 'git ham-sandwich' is at your fingertips!" OK, I exaggerate a little.
... just shut up already? Oh, OK then.