Part of the new Geolocation API provides a method to get a user’s current position. In theory that’s awesome, but in practice it was returning inconsistent results. Sometimes it was accurate; other times it was off by miles. If I tried it multiple times in a row it would get more accurate, but only some of the time. The app I was working on needed better accuracy, so I tried a few things.

First I tried setting the enableHighAccuracy option (it defaults to false, i.e. low accuracy):

function logError(error) { console.log(error.message); } // the error callback receives a PositionError, not a Position
navigator.geolocation.getCurrentPosition(centerMap, logError, {enableHighAccuracy: true, maximumAge: 0, timeout: 30000});

This was MUCH more accurate, but it took about 45 seconds on my iPhone to respond. Not okay. So I looked at another solution: watchPosition(). Instead of returning a single reading, it fires your callback every time the device gets a new position fix, so it can narrow in on where you are, even if you’re moving.

navigator.geolocation.watchPosition(centerMap);

Boom. Much better results. BUT it gets called a lot, and I don’t want my map jumping around; I just want one accurate reading. I came up with the following solution:

var watchCount = 0;
var watchId = navigator.geolocation.watchPosition(centerMap);

function centerMap(location) {
    var myLatlng = new google.maps.LatLng(location.coords.latitude, location.coords.longitude);
    map.setCenter(myLatlng);
    map.setZoom(17);
    watchCount++;

    // do it 2 times: the second reading tends to be more accurate
    if (watchCount >= 2) {
        // show current location on map
        var marker = new google.maps.Marker({
            position: myLatlng,
            map: map,
            icon: 'http://www.google.com/gmm/images/blue_dot_circle.png',
            clickable: false,
            zIndex: 1
        });

        // don't need to watch anymore
        navigator.geolocation.clearWatch(watchId);
    }
}

Basically it keeps watchPosition() running until two positions have been returned (I mentioned earlier how repeated readings tend to be more accurate than the first), then it stops following the user. I set the threshold to 2 because the success callback only fires when the location changes, so if I had set it higher and the user wasn’t moving, I would never get my reading.
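
A variation worth considering: instead of counting callbacks, you can check each reading’s reported accuracy. The coords.accuracy property is part of the same Geolocation API; the 50-meter threshold below is my own arbitrary cutoff, not something from the spec:

var watchId = navigator.geolocation.watchPosition(function (location) {
    // coords.accuracy is the radius of uncertainty, in meters
    if (location.coords.accuracy <= 50) { // arbitrary "good enough" cutoff
        var myLatlng = new google.maps.LatLng(location.coords.latitude, location.coords.longitude);
        map.setCenter(myLatlng);
        map.setZoom(17);
        navigator.geolocation.clearWatch(watchId);
    }
});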

Check out the spec for more info!

 

Every once in a while I get a submission to Song Key Finder that contains weird characters (they show up as boxes). I use the MusicBrainz database API to verify song and artist information, so my first thought was to consult their documentation. Since the characters appeared in non-English songs, I looked into their internationalization notes:

MusicBrainz uses UTF-8 for all its data, which means that all the data is stored in Unicode and supports lots of different languages.

Got it. So now that I knew their data is in UTF-8, I needed to make sure my page was being interpreted the same way. Smashing Magazine has a great article about character encodings that explains everything:

All the encoding problems…are caused by text being submitted in one character set and viewed in another. The solution is to make sure that every page on your website uses UTF-8. You can do this with one of these lines immediately after the <head> tag:

<meta charset="UTF-8">
<meta http-equiv="Content-type" content="text/html; charset=UTF-8">

It has to be one of the first things in your Web page, as it will cause the browser to look again at the page in a whole new light. For speed and efficiency, it should do this as soon as possible.

So all I had to do was add this as the first line of the head, and all the weird characters started showing up normally (most were accented a’s and e’s)!

<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
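
One more thing worth checking if the meta tag alone doesn’t fix it: a charset sent in the HTTP Content-Type header takes precedence over the meta tag, so the two need to agree. If you’re serving pages through PHP, for example, you can set it explicitly:

<?php
// Send the charset in the HTTP header so it matches the meta tag.
header('Content-Type: text/html; charset=utf-8');
?>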

 

While attending the JavaScript conference FluentConf last week in San Francisco, I heard a few people talking about a tool called Errorception. They said it offered an easy way to track all the client-side errors your users are experiencing. Since I’m a fan of an error-free website, I decided to give it a try.

Signup was super simple – you don’t need a credit card to try it out and you can be up and running within minutes. I’ve only been using it for about a week so far, but here’s my list of pros and cons:

PROS

  • Reasonably priced. Their cheapest plan is $5/month, and the 30-day free trial lets you test it out without committing to anything.
  • Extremely high performance. After installing the code I profiled the script’s load time: about 5ms. I don’t think I’ve ever put anything on my pages that loads that quickly, not even static CSS on my own server.
  • Great interface. It groups duplicate errors to avoid muddying the reports, everything is consolidated in an easy-to-read fashion, and you can even get periodic emails letting you know about new errors.
  • Real-time. Errors started pouring in within minutes. I reached my daily cap (for my trial account) within about an hour.
  • It’s run by entrepreneur/developer Rakesh Pai. I emailed Rakesh a few questions and he went above and beyond helping me out. Rakesh is an extremely intelligent and thoughtful human being who wants everyone to succeed. He claims he’s no rock star, but what he’s been able to accomplish puts him in the class of Jimi Hendrix.

CONS

  • I’m having a hard time replicating some of the errors. It provides the user’s browser, user agent, and the line number of the offending script, but that doesn’t always seem to be enough: IE line numbers are reportedly inaccurate, and with so much JS minified it’s hard to really trace the problem. Stack traces are not available; the website explains the reasoning for this and I agree with the choices that were made, so I’m not sure what to suggest (the sketch after this list shows why the data is so limited). Since the goal of using this service is to eliminate errors, I’m not sure how successful I’ll be if I can’t replicate them first.
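
For context, browser-side error trackers in this era generally hinge on window.onerror, which in most browsers receives only a message, a URL, and a line number (no stack trace). A minimal sketch of the mechanism, using a hypothetical reporting endpoint rather than Errorception’s actual API:

// Sketch of how client-side error tracking generally works.
// window.onerror only receives three arguments in most browsers,
// which is why reports are limited to a message and a line number.
window.onerror = function (message, url, lineNumber) {
    // A real service would batch these up and send them to its own
    // servers; this endpoint is hypothetical.
    var beacon = new Image();
    beacon.src = '/log-error?msg=' + encodeURIComponent(message) +
                 '&url=' + encodeURIComponent(url) +
                 '&line=' + lineNumber;
    return false; // let the browser's default error handling run too
};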

SUMMARY

Great service run by great talent. The only shortcoming is perhaps a domain problem that can’t be solved, or a shortfall of my own programming intellect. I’d highly recommend anyone at least give it a try.

 

We all hate CAPTCHAs, so when I saw a new CAPTCHA alternative called Qaptcha, I was slightly intrigued. Instead of typing in mangled words, you simply drag a slider to prove you’re human. How simple!

See the demo here.

But the more I looked at it, the more I thought about how easily it could be faked. All it does is require a simple client-side action, which then fires an Ajax call to set a flag saying you’re human. So why couldn’t a bot just make that same call itself?
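
To make that concrete, here’s a rough sketch of what a bot could do. The endpoint and parameter names are hypothetical, but the shape applies to any slider-style check that validates purely on the client (Qaptcha is a jQuery plugin, so jQuery is assumed to be on the page):

// A bot doesn't need to drag anything: it can simply replay the
// request that the slider's success handler would have made.
// The endpoint and parameters here are hypothetical.
$.post('/qaptcha-validate.php', { slider: 'unlocked' }, function () {
    // The server now thinks we're human; submit the protected form.
    $('#protectedForm').submit();
});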

Turns out it’s not secure at all. I posted on Stack Overflow and someone responded with a working bypass within a few minutes. Pretty impressive, but also pretty disappointing that it doesn’t accomplish what it’s supposed to.

 

I was curious whether JavaScript is still executed by the browser even when it’s inside an element that’s hidden in the DOM. A quick search yielded no conclusive evidence, so I decided to test it myself.

Quick answer: yes, it still executes, for both inline and included JavaScript, regardless of how the element is hidden (display:none or visibility:hidden).

I made a simple test page and ran it in Internet Explorer 7, 8, and 9, Chrome, Safari, and Firefox; all of them behaved the same way.

Here’s the test page I used. In each browser I got seven alerts. In case you’re wondering, the testjs.js script just makes a simple alert() call; I included it to see whether an external script made any difference.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
    <script>
        alert('head');
    </script>
</head>
<body>
    <div>
        visible div
        <script>alert('visible div');</script>
    </div>
    <div style="display:none">
        display:none
        <script>alert('displaynone div');</script>
    </div>
    <div style="visibility:hidden">
        visibility:hidden
        <script>alert('invisible div');</script>
    </div>

    <div>
        visible div
        <script src="testjs.js"></script>
    </div>
    <div style="display:none">
        display:none
        <script src="testjs.js"></script>
    </div>
    <div style="visibility:hidden">
        visibility:hidden
        <script src="testjs.js"></script>
    </div>
</body>
</html>
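
For reference, testjs.js contains nothing but a single alert (the message text here is my own placeholder):

// testjs.js: included from both the visible and hidden divs above
alert('external script');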
 

If you have a PHP script that may take a long time to process (more than a couple of minutes), make sure you’re printing output as it gets generated. If the page takes too long to load and nothing is being sent, some browsers will close the connection. All you have to do is print a character now and then to keep the connection alive, so if you’re in a loop, try printing “.” every iteration (and flush it, since output is often buffered).
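
Here’s a minimal sketch of the idea. processItem() and $totalItems are hypothetical stand-ins for your own slow work, and the flush calls matter because PHP and the web server often buffer output:

<?php
for ($i = 0; $i < $totalItems; $i++) {
    processItem($i); // hypothetical slow work
    echo '.';        // keep data trickling out to the browser
    // Flush PHP's output buffer (if one is active), then the server's.
    if (ob_get_level() > 0) {
        ob_flush();
    }
    flush();
}
?>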

Hope that helps!

 

I used the Codiqa Screen Designer on the jQuery Mobile homepage, and while at first I was impressed, I noticed quite a bit of extra padding and spacing. I tried to override the CSS, but the best way to get rid of it was to remove all of the fieldcontain and controlgroup wrappers it adds in:

<div data-role="fieldcontain">
    <fieldset data-role="controlgroup">

Now it looks like a normal form!
 

If you ever come across a website that renders as though it’s in IE7, even though you’re running IE9 or later, try checking the meta tags. It probably has this set:

<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7">

This tells the browser to render as though it’s Internet Explorer 7 no matter what the version.

If you need to detect programmatically whether this is happening, you can check the document.documentMode property:

if (document.documentMode == 7) {
    // ... the page is rendering in IE7 mode
}

Hope that helps!

 

We recently tried Google’s PageSpeed Service to see if it was quicker and easier than using a CDN. Quick answer: it didn’t help us much. But I did encounter a number of weird 404s while testing that I thought I’d document.

Before making any permanent changes we of course wanted to test it locally, which we did by modifying our proxy settings for the site we were testing. We have several resources on Amazon CloudFront, and some of them just weren’t getting loaded properly!

It turned out to be caused by using the “Proxy Switchy!” extension to run our test. I had set up a separate proxy profile that I could switch to when I wanted to test, so I could run side-by-side comparisons easily, but for some reason that wasn’t working. If you follow Google’s recommended settings you won’t have this issue: just modify your main proxy connection instead of creating a separate one to test with.

Hope that helps!

 

While developing a mobile app, I noticed that one person’s profile picture was showing up rotated sideways. When I viewed it in my desktop browser it appeared fine, but on an iPhone it was always rotated!

At first I thought this was a coding problem, but then I realized that if I accessed the image directly by its URL it did the same thing! It turns out that some cameras store orientation data with the image in its EXIF metadata. Most desktop browsers ignore it, but mobile browsers honor it: they read the orientation flag and rotate the image to match how it was taken.

An easy solution is to remove the EXIF data from the image. A quick Google search turned up sites that will strip the EXIF data from any image and let you re-save it. Doing that fixed it!
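
If you’d rather strip the data locally, ImageMagick’s -strip option removes profiles and metadata, including the EXIF orientation flag. A one-liner, assuming ImageMagick is installed and the file is named photo.jpg:

# Remove EXIF/profile metadata in place (ImageMagick)
mogrify -strip photo.jpg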

Hope that helps!
