Posts in category internet


Mini-rant/follow-up: It has almost been two years since I wrote about my issues interfacing with OpenID, and since I have recently been getting deprecation warnings from Google, I finally put in the work to optionally support password authentication. That will teach me to try to do the right thing.

Similar Projects to SandboxOS

When I started working on SandboxOS, I was somewhat in disbelief that nobody was doing this already. Since then, I have discovered a handful of projects with similar goals, but as far as I can tell, it is still unique enough to continue pursuing.

JavaScript Contenders

There are a handful of JavaScript-related projects with similarities.

node.js + io.js

Node.js and, by extension, io.js are essentially another runtime environment, similar to those of Perl, Python, and Ruby.

node.js popularized using JavaScript outside of the web browser, especially on the server side, allowing web applications to be built completely in JavaScript. But ultimately node.js and io.js compete in an already overpopulated niche.

SandboxOS is different, because it introduces a security model and an application model.


node-os

node-os is an operating system built from node.js running on a Linux kernel.

This seems to be the result of taking the package manager from node.js, npm, to its extreme and using it for managing all system files.

There is not really much comparison with SandboxOS. It is just another interesting project in the same area.

Runtime.JS

Runtime.JS is an operating system kernel built using V8 and JavaScript.

It is an attempt at eliminating one layer of the stack commonly present in node.js applications.

SandboxOS, for better or worse, adds another layer.

Linux Container-based Contenders

I am not especially familiar with recent developments in Linux containerization. I've gathered that it is a step beyond virtualization, allowing some amount of isolation without sacrificing performance.

SandboxOS differs from all of them in that it attempts to make it easy to host web applications on all sorts of devices - anything where web browsers are found. While I have yet to measure the performance implications, the goal is to move toward having many more servers with many fewer (often just one?) users.

Docker / Rocket

Docker is the current dominating presence in this area. It appears to be tailored primarily toward sysadmins.

Rocket is an alternative being built for CoreOS.

Sandstorm

Sandstorm is the first project that actually made me worry I might be stepping on somebody's toes. Their goals are very much aligned with mine, but they're taking a completely different approach, using Linux containerization. Interestingly, Sandstorm does a good job explaining why containers are great but don't solve enough of the problem.

They aim to make it trivial to run servers and make a better future for open source webapps, and I wish them luck.

Editing Apps

It may just be because programmers like strange loops, but being able to edit applications from an application which is itself editable is a fundamental assumption in SandboxOS.

SandboxOS targets the most devices by far.

OLPC Develop Activity

I think I first heard such a goal proposed for the One Laptop per Child project. Their Develop activity seemed to be down-prioritized pretty quickly, but it was a neat concept and it is a neat project.

Wiki OS

I'm not clear on the origins of Wiki OS, but it seems to be an attempt at reproducing a traditional desktop environment on the web, on devices where Silverlight can run, and it allows modifying applications through an interface that I would compare to Visual Basic.

Android AIDE

I encountered AIDE while marveling at how unnecessarily difficult mobile development is made by the available toolchains. That project puts all of the tools necessary to build Android applications on-device.

Decentralization

One final theme I am watching for activity on is decentralization.

This recent article hints at future tech to decentralize web applications "like BitTorrent does" without any information on how that can work.

Unhosted web applications are another trend of building web applications without a server component altogether.


There is a lot of activity in this area. So far, SandboxOS seems to be going in the right direction and not stepping on anybody's toes, so I'm going to charge forward with it!


This is a continuation of TLS WTF. I added SSPI / SChannel support to my list of supported TLS backends. It was not a pleasant experience, so I am taking another moment to document my grievances.

SSPI is a shining example of an API that is hard to use correctly.

  1. Microsoft documents that rather than linking against an export library, you must load security.dll dynamically and look up a symbol.
  2. To begin the TLS handshake, you call InitializeSecurityContext or AcceptSecurityContext, depending on whether you are the client or server. The credentials already know which direction we're going at this point, so this seems redundant.
  3. One argument to those functions is a pointer to a handle that is populated when the call succeeds, and a pointer to that handle must be passed back as a different argument on subsequent calls. But not before it succeeds once.
  4. InitializeSecurityContext, AcceptSecurityContext, EncryptMessage, and DecryptMessage all take arrays of input and output SecBuffer structs. After working with two other libraries that had much simpler mechanisms for passing data in and out, this just seemed wrong.
    1. Data is encrypted and decrypted in-place. While it might seem sort of clever, as though it's saving something or destroying no-longer-needed sensitive information, due to the way it works and the lack of any apparent guarantees, I found I needed an extra copy of the data anyway in order to properly restore the remaining unencrypted/decrypted bytes for the next pass.
    2. Each particular type of invocation has special requirements on the number and type of buffers it takes as input and returns as output. It's basically deliberately throwing away all parameter/type safety. Or conversely it's requiring the user to know quite a lot about the specifics of the protocol.
    3. One type of SecBuffer contents that is returned is the number of data bytes passed in that haven't been consumed. But not a pointer to the bytes like every other buffer.
  5. Shutting down a connection is bizarre. Call ApplyControlToken and then proceed as though you're starting a new handshake. I realize only now that I'm doing this incorrectly.
  6. I was pleased that I was pretty quickly able to load a PEM certificate and private key. And then it took me the better part of a day to be able to associate them with each other to be able to use them.
    1. A certificate context (container?) seems to be necessary. There is a way to get an anonymous one, but it doesn't seem to work for this purpose. I haven't found a way to get one if it exists and create it otherwise except by trying one and if it fails, trying the other.
    2. It is necessary to open a certificate store with the name "MY". What?
    3. A property "CERT_KEY_PROV_HANDLE_PROP_ID" on the certificate context looks exactly like what I would want to set to associate a key with a certificate, but it has no apparent effect. "CERT_KEY_PROV_INFO_PROP_ID" is the only one that apparently actually works, but it requires jumping through a bunch more hoops.
    4. All of this leaves some key/certificate data in some system location. I haven't found any way to avoid that.
    5. Most of the API worked with multi-byte or wide characters. One or two functions didn't.
    6. I've completely lost track of which things are opaque data types, handles, pointers to handles, or pointers to pointers to handles. It's out of control.
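To illustrate point 4.1: when a library scrambles your buffer in place and only reports how many trailing bytes it did not consume, you need a copy taken before the call in order to recover those bytes. A toy Python sketch, where `fake_decrypt_in_place` merely stands in for DecryptMessage's behavior:

```python
# Hypothetical illustration only; this mimics the in-place behavior
# described above rather than calling any real SSPI function.
ciphertext = bytearray(b"HEADERsecretEXTRA")

def fake_decrypt_in_place(buf):
    # stand-in for DecryptMessage: clobbers the whole buffer in place
    # and reports that the last 5 bytes were not consumed
    for i in range(len(buf)):
        buf[i] ^= 0x20
    return 5  # count of unconsumed trailing bytes, with no pointer to them

saved = bytes(ciphertext)        # the extra copy, made before the call
extra = fake_decrypt_in_place(ciphertext)
remaining = saved[-extra:]       # restore the untouched tail for the next pass
```

Without `saved`, the unconsumed tail would be unrecoverable, because the buffer it lived in has been overwritten.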

Again, I just hope someone stumbles on this or browser:projects/sandboxos/trunk/src/Tls.cpp, and is saved some of the hassle I went through.

I realize I am not done, but I really need to put this aside for now. Some future tasks include:

  • Support allowing individual certificates whose signing authority isn't otherwise respected (self-signed certificates).
  • Need to separate the certificate and key setup from the connection. I'm probably adding an awful lot of overhead to each connection by re-importing them each time.
  • Need to expose error messages. I've just been instrumenting the code as I go to discover what I've been doing wrong, but the actual errors are useful, so I need to make them available.
  • Need actual tests.
  • I want to be able to generate self-signed certificates on all platforms.


This is my recollection of my misadventures in trying to support TLS for SandboxOS.

The Goal

I have a JavaScript runtime environment. I want it to be able to do these things:

  • Run a web server that supports HTTPS to secure or even just obfuscate my traffic.
  • Connect to web servers over HTTPS to post to Twitter and whatnot.
  • Connect to XMPP and IRC servers requiring secure connections to run chat clients and bots.

I want it to be able to do those things on Linux, OS X, and Windows, and I don't have a very high tolerance for complicated build steps or large dependencies.


"I want to use SSL," I thought to myself. Apparently what I wanted is called TLS these days. I scratched my head and moved on, because this was the least of my problems.

Going in, I knew that OpenSSL had recent vulnerabilities, leading it to be forked into LibreSSL. I knew that GnuTLS was a thing. Outside of that, I did not have much knowledge about what was available.

I did some research and found that I should look into Mozilla's NSS as something that could potentially work on all the platforms I cared about. I also learned that if I wanted to support the most common libraries on each platform, I would need to look into SSPI on Windows. I also saw that node.js uses OpenSSL on all platforms.

I'm using libuv for doing all of my socket I/O, and I'm pretty happy with it. But it poses a challenge here, because these libraries tend to prefer being used as a wrapper on top of native BSD-style sockets and I didn't want TLS interfering with libuv's event loops at all. I chose to try to pass data around myself in order to avoid that scenario. It looks like NSS creates unnecessary intermediate sockets to abstract away that interface.
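For illustration, the "pass data around myself" model can be sketched with Python's ssl.MemoryBIO, which keeps TLS entirely off the sockets: the TLS engine reads and writes memory buffers, and the application shuttles raw bytes between those buffers and its own transport (libuv, in my case). This is a stand-in for the wrapper described below, not its actual code.

```python
import ssl

# Client-side TLS with no socket: verification is disabled here purely
# to keep the sketch short, never do that in real code.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

incoming = ssl.MemoryBIO()   # bytes received from the network go in here
outgoing = ssl.MemoryBIO()   # bytes to send over the network come out here
tls = ctx.wrap_bio(incoming, outgoing)

try:
    tls.do_handshake()       # no peer data yet, so it can only ask for more
except ssl.SSLWantReadError:
    pass                     # expected: the reply isn't in `incoming` yet

client_hello = outgoing.read()  # hand these bytes to your own transport
```

The event loop never blocks on TLS; it just feeds `incoming`, drains `outgoing`, and retries the handshake until it completes.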

Getting Started: Build the Libraries

I believe it went like this:

  1. Look at Mozilla NSS (Network Security Services).
    • GPL-compatible license. Good.
    • Supports the platforms I care about. Good.
    • Get it and try to build it. They have a non-trivial custom build harness. Not a great sign, but OK.
    • Try to build it for win32/x64. Discover this note:

      Note: Building for a 64-bit environment/ABI is only supported on Unix/POSIX platforms.

    • Looked at the API a bit. It looks like it really wants to be bound to system sockets. That will incur unnecessary overhead the way I want to use it. Lame.
    • That was enough strikes for me. I wasn't about to maintain my own builds of this for three platforms, and I certainly wasn't about to let this constrain me to Win32/x86, if that note was correct.
  2. Look at LibreSSL.
    • GPL-compatible license. Good.
    • It looks like the latest release is expected to be usable. Great.
    • It only builds on Windows through MinGW. Ugh.
    • By this time I had some OpenSSL code. I tried it on OS X and found that the OpenSSL system library on OS X was supported but had been deprecated for some time. Crap.

I Wrote a TLS Wrapper

I resolved to support OpenSSL on Linux, the Secure Transport API on OS X, and SSPI on Windows, and all in code that could be easily extracted to use for other projects.

Currently the entirety of the public API looks like this, though I haven't yet tackled SSPI:

class Tls {
public:
    static Tls* create(const char* key, const char* certificate);
    virtual ~Tls() {}

    virtual void startAccept() = 0;
    virtual void startConnect() = 0;
    virtual void shutdown() = 0;

    enum HandshakeResult {
        // values elided
    };
    virtual HandshakeResult handshake() = 0;

    enum ReadResult {
        kReadZero = -1,
        kReadFailed = -2,
    };
    virtual int readPlain(char* buffer, size_t bytes) = 0;
    virtual int writePlain(const char* buffer, size_t bytes) = 0;

    virtual int readEncrypted(char* buffer, size_t bytes) = 0;
    virtual int writeEncrypted(const char* buffer, size_t bytes) = 0;

    virtual void setHostname(const char* hostname) = 0;
};

Implementation Rants

I don't know where to begin.

  1. There are not enough good examples. If you are an engineer at Apple who worked on the Security framework, where did you put your ~100 line C file test case that fetches a page over HTTPS and fails if you try to reach the same server by a different name?
    • This was the best I could find for OpenSSL. It's alright but 12 years old.
    • This is why I haven't attempted SSPI yet. I think SSPI will be manageable, but that example is insane.
  2. Do not pretend TLS connections are BSD-style sockets. TLS is a terribly leaky abstraction. Socket calls that ought not to block might need to block so that TLS handshaking can finish. New errors can occur at every turn. A TLS session can close without the network connection going away. Stop pretending it's not a separate layer.
  3. Verify hostnames. TLS without hostname verification isn't secure. OpenSSL did not make verifying hostnames easy. The headers I have on debian jessie required me to extract the names from the certificate and do my own pattern matching to account for wildcard certificates. I had several implementations where it appeared the default verification would check names, but only testing showed that it wasn't happening. Apple made it a fair bit easier, but it still didn't happen by default.
  4. So far I have not yet found how to load my certificate and private key from PEM files on OS X without calling this private function from the Security framework:
    extern "C" SecIdentityRef SecIdentityCreate(CFAllocatorRef allocator, SecCertificateRef certificate, SecKeyRef privateKey);
    All other attempts I've made at getting a SecIdentityRef from my key and certificate have failed. I could keep them in the login keychain, but I want to support PEM on all platforms for consistency.
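The wildcard matching described in point 3 can be sketched roughly as follows. This is a hypothetical illustration of the pattern matching I had to do by hand, not the code from Tls.cpp, and a real implementation should follow RFC 6125 rather than this simplification:

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Match a certificate name against a hostname, allowing a single
    left-most wildcard label (e.g. '*.example.com')."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    # a wildcard covers exactly one label, so label counts must match
    if len(p_labels) != len(h_labels):
        return False
    if p_labels[0] == "*":
        p_labels, h_labels = p_labels[1:], h_labels[1:]
    return p_labels == h_labels
```

So `*.example.com` matches `www.example.com` but neither `example.com` nor `a.b.example.com`, which is the behavior browsers expect.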


So SandboxOS does what I need for TLS for now. It took me five times longer to write than I had hoped, and doesn't support TLS on Windows yet.

It can connect to secure servers. It verifies hostnames when you do that. I can run a secure web server with it. It might be fun to support client certificates, but that is about as far as I want this to go, and that can happen later.

I'm hoping somebody searching for some of these words will stumble upon this and either show me how stupid I've been or benefit from browser:projects/sandboxos/trunk/src/Tls.cpp.


I added SSPI support.


Several months ago I got a notion for a potentially enormous programming project I wanted to undertake. I've spent bits of time on and off implementing it, and it has come quite far. The last thing I want to do is stop all of the exciting work to document it, but after some reflection, that is the thing that it most obviously needs right now.

What follows might be crazy-town, but it still seems pretty neat to me.

What Is SandboxOS?

I struggle to find a comparison that does it justice, but I keep trying:

SandboxOS is...

Nuts and Bolts

  • It runs applications on Google's V8 JavaScript engine.
  • It uses libuv, from node.js, to interact with the OS when possible.
  • Each application runs as a separate process, as a first step toward implementing Google Chrome's Sandbox architecture.
  • Applications can interact with other applications by calling exported functions, provided that they have requested and been granted permission to do so.
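The exported-function model in that last point can be sketched like this. All names here (App, grant, call_export) are illustrative, not SandboxOS's actual API; the point is that every cross-application call goes through one permission checkpoint:

```python
class App:
    """Hypothetical stand-in for a SandboxOS application process."""
    def __init__(self, name):
        self.name = name
        self.exports = {}     # function name -> callable
        self.granted = set()  # (target app name, function name) pairs

def grant(caller, target, function):
    # the user approves caller's request to call one of target's exports
    caller.granted.add((target.name, function))

def call_export(caller, target, function, *args):
    # the single checkpoint: no grant, no call
    if (target.name, function) not in caller.granted:
        raise PermissionError(f"{caller.name} may not call {target.name}.{function}")
    return target.exports[function](*args)

clock = App("clock")
clock.exports["now"] = lambda: "12:00"
widget = App("widget")
grant(widget, clock, "now")
```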

The Last Thing I Wanted

I tried everything I could think of to meet the goals of this project without resorting to using JavaScript.

I ended up settling on JavaScript for a few key reasons:

  • Web browsers are by far the most widespread use of a sandboxed runtime that I could find. Plainly, I want to force an opt-in decision about whether an application can invoke any individual thing outside of the application.
    • I looked at a few attempts at sandboxes for languages like Perl, Python, and Ruby, but it was clear that sandboxed execution was so far from the goal of those runtimes that it was a lost cause. I even passed up node.js for the same reason - it was easier to start from scratch than to audit and secure everything there. The few approaches that looked sound were either not portable or incomplete.
    • I know that Java and .NET make security claims, but I ruled them out early, because the security models seemed far from anything I could leverage for my specific needs, and the licensing and portability concerns are still pretty huge.
    • Web browsers run untrusted code all of the time. They crash tabs and eat memory, but at the end of the day, they are the one example of sandboxing I could find that is essentially globally relied upon.
  • Even if JavaScript isn't perfect, it has features that give it huge potential for my needs.

Circles and Arrows

Below is the bigger picture that I see this all fitting into. The interesting thing to me is not what I specifically imagine this thing doing but how little effort, in implementing a few small parts, it has taken to enable these sorts of things.

GraphViz image


I intend to follow this up with related posts on more focused topics. If nothing else, this project has forced me to learn at least a little bit more about a lot of things I wouldn't have otherwise interacted much with:

  • V8
  • libuv
  • TLS
  • JavaScript
  • C++11

For now, some SandboxOS links:

Good Riddance to Google Reader

I am a little bit surprised at the response I have seen to Google's announcement that they are shutting down Google Reader.

For me, they killed Google Reader in October of 2011 when they broke sharing. A little over a year later, I wrote my own replacement just to get proper sharing again.

  1. I don't want to hear about any silly petition to bring Google Reader back. It was already a husk of what it should have been. If it somehow returned by popular demand, it would not be run and maintained with the care it deserved. Let it go. Find an alternative.
  2. To the multiple projects that claimed you were going to produce a replacement following the crippling of sharing: you are the worst kind of projects for giving hope and not following through. Releasing a bad alternative would have been better. Saying "oops, never mind" would have been better.
  3. There is no reason why something like an RSS reader needs to be centralized and run by a big company like Google. Let's see some competition.

Applying OpenID

I have written a fair number of webapps. They are virtually all quick and dirty things mostly for myself and immediate friends, but there are more than a few. I am pretty sick of implementing authentication.

I have two previous go-to approaches to authentication:

  1. Collect a username, password, and email address. The email address is used for password recovery/resets.
  2. Avoid collecting any user data. Generate some random value and stick it on the URL. If someone wants to return as the same person, they need to save that URL.
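Approach 2 amounts to handing out an unguessable capability URL. A minimal sketch, with the `/u/` path purely illustrative:

```python
import secrets

def new_anonymous_identity():
    """Approach 2: no account data at all, just a random value on the
    URL that acts as the user's identity if they save it."""
    token = secrets.token_urlsafe(16)  # ~128 bits of randomness
    return "/u/" + token
```

If the user loses the URL, they lose the identity; that is the whole trade-off.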

For my last project, I decided to take OpenID for a spin. There is already plenty written on the failings of OpenID.

But my experience was slightly different from what I have read, so I want to elaborate.


I wanted to make my webapp, but I knew I wanted to try something like OpenID for authentication. I say "something like," because going into this, I had no preconceptions about whether OpenID, OAuth, or some other proprietary thing I had never heard of was what I wanted. That probably should have been enough to stop me.

I decided I would be using OpenID via python-openid. As far as I can tell, the included examples are the only documentation of that library. The django example did not help me much, as I was not using django and am not very familiar with it. I focused mostly on the other included example, but it has a number of configurable parameters that only confused me, and I did not find it helpful at all that it only functioned as a standalone web server.

I fought with it for a while and eventually ended up with something which purported to authenticate me. I also found and hooked up a reasonably nice frontend for encouraging people to use well-known providers.


I eventually went on to work on actually implementing my web app and started showing it to people. Several things happened. These things surprised me. Authentication is not a place where I want to be surprised.

  1. People associate authenticating with third parties with leaking information from those third parties.
    • My app deals with RSS feeds. People authenticated with their Google accounts. People were disappointed that their Google Reader RSS feeds did not show up.
    • People do not like leaking information.

      I'm not sure why, but whenever I see "login with Facebook" or similar stuff, I tend to close the tab and move on.

  2. I misinterpreted what guarantees OpenID was making. I may still be doing so. The biggest example of this is that Google returned a different ID depending on the realm I specified.
    • I initially had generated the realm from the HTTP request, which meant that the domain people used to get to the site and whether they included a trailing slash on the URL would cause them to authenticate as different users.
    • I experimented with moving the site to a different host, but that would have required changing the realm and therefore somehow re-authenticating everybody.


This little adventure was not all failure. It is still in operation, and people use it.

Probably the coolest thing to me is that I am storing so little personal identifying information. In general, I would say that people should care about this more than they do. I store OpenIDs. I store no email addresses. Names are optional. If someone gets access to the database, they get a list of Google OpenID URLs that are specific to my site and therefore pretty worthless.

Google tends to remember that I have logged in, so I can get into my site just by clicking on the Google provider logo.

But OpenID has not really solved any problems I have, and it has been a big hassle to integrate correctly.

Passing on Message Passing

I built a small web app recently, and one particular aspect wasted my time more than I anticipated. It may just be that I'm poorly informed of current trends, but I found all of the documentation and available advice to be poor, and the solution I eventually arrived at is probably close to the best I can get.

The general idea was to have a web front-end that could start and monitor jobs. Jobs would be arbitrary commands run on separate worker machines, which would stream output back to a central location for immediate viewing on the web site.


In several instances in the past where I wanted to have web sites with instant communication, I think I read the Wikipedia articles for AJAX and Comet, and then I wrote some Python scripts centered around a thing I called dbserver. The code is all on this site, but it's in such a poor state that I'd rather not reference it.

The general structure looked like this:

  1. The web front-end uses jQuery to issue asynchronous requests to the web server, which are handled by a Python script.
  2. That script establishes a socket connection to an always-running background daemon, dbserver.
  3. dbserver forwards messages between clients. In some iterations, I had it always broadcast every message to all clients. In some iterations, I registered a handful of persistent clients to which all messages went and who could direct messages at transient clients.
  4. One of the clients was often a database script, which mostly interacted with an sqlite database and told dbserver to notify clients when things in the database changed.

I would say my biggest problems in this setup were on the database side of things. I generally failed at having a good way of querying the database from the web frontend that was descriptive, safe, and fast. And I had to not just query the database but also register for notification of certain types of updates.

I also had some problems with duplicate messages (or perhaps the same message going to two clients which turned out to be different requests from the same web client). I have some ideas on how I could have simplified things and been more rigorous.

But really it was just a lot to maintain, and I couldn't believe that there wasn't something already out there for this.


I have difficulty finding things when I don't know what exactly it is that I want.

It seems that the "real-time" web is the New Thing™ that matches the original problem I was trying to solve.

Many of the things I found centered around Twisted, Tornado, Orbited, and Node.js. To be honest, I didn't really give any of them a fair chance. I'm not looking for a different web server - just a thing to connect my existing webapp with other things. None of them really jumped out as being capable in the ways I needed.

A common theme seemed to be "message passing." Specifically, I found a number of frameworks built around RabbitMQ. When I arrived at this point, I was feeling good. I routinely undermine libraries I'm given to be able to work at a more reliable, manageable lower level, and RabbitMQ seemed to be a common tool in this business.

RabbitMQ has a number of tutorials using a python library called pika, so I started with that. The first few tutorials were pretty good, but after that I found the documentation to be pretty awful. pika deferred to RabbitMQ. RabbitMQ documentation was absent or tautological ("exchange-name exchange Name of the exchange to bind to."). I muddled around and got a test sort of working. I got fed up when things inexplicably hung or failed to clean up after themselves as I requested. It was never clear when I needed to pump what API for things to work and which things would block.

Next I tried Kombu, which was admittedly more Pythonic but seems to be unmaintained. I ran into the same issues as with pika.

I finally went with py-amqplib, which was a lot like pika, but somehow easier for me to manage. Or perhaps at this point I had learned enough that I was just able to manage it better than my first attempt. Regardless, I got my simple test case basically working.

  1. I clicked a button on a web site. The web site sent a request to some script.
  2. That script set up a new "response" exchange for itself, bound a queue to it, and sent a message to a well-known RabbitMQ exchange informing it of a "job" and the exchange to which to respond.
  3. There was a script listening to a queue bound to that exchange. It pulled the job off the queue and feigned doing some work.
  4. The script sent a series of messages back to the "response" exchange.
  5. The web site hit a script which hit RabbitMQ to stream back the messages output from the "response" queue, blocking when appropriate to wait for the next message.
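The exchange/queue flow in those five steps can be sketched with an in-memory stand-in for RabbitMQ; the dict of named exchanges below is purely to show the shape of the protocol, not real AMQP:

```python
import queue

exchanges = {}  # exchange name -> list of bound queues

def declare_exchange(name):
    exchanges.setdefault(name, [])

def bind_queue(name):
    q = queue.Queue()
    exchanges[name].append(q)
    return q

def publish(name, message):
    for q in exchanges[name]:  # fan out to every bound queue
        q.put(message)

# the web request declares its private "response" exchange, then posts a job
declare_exchange("jobs")
worker_queue = bind_queue("jobs")
declare_exchange("response-1")
response_queue = bind_queue("response-1")
publish("jobs", {"job": "build", "reply_to": "response-1"})

# the worker pulls the job off its queue and streams output back
job = worker_queue.get()
publish(job["reply_to"], "output line 1")
```

Even in this toy form, the fragility is visible: if the worker publishes before `response-1` has a queue bound to it, the output is silently lost.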

At this point, I appreciated that there was something of a standard for this business. There was no apparent chance of a message going to the wrong client, no needless broadcasting of messages to all clients, and no bookkeeping on my part to make sure that I only processed each message once.

But at the same time, this was super fragile. I had some of the same problems as with pika and some new ones. Ensuring that exchanges were declared at the right time and had queues bound to them, so that a worker could start feeding messages in before the web client was ready to start reading them, felt as though it was as complicated as just doing what I did in dbserver. And I hadn't even done the second part of this, which was going to involve creating another process which listened to all exchanges and logged to a database, so that I could view logs after the jobs were finished.

Operating Systems

My takeaway was that I didn't want queues to be centralized. At least in my use case, I wanted to have one authoritative copy of the streaming output, and I wanted each client to simply keep track of how much of the output it had seen. I never had to worry about cleaning up anything like queues or exchanges, because the only state that was needed was an offset on the client.

The solution was incredibly simple.

  1. I clicked a button on a web site. It started a "job" process immediately.
  2. The worker wrote to a file.
  3. The web site made a request to a script which read from the file, starting from an offset provided by the client.
    • If there was more data there, it returned it.
    • If there was no data and the worker was still running, it called select(), which blocked until there was more data available.
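The offset-based read at the heart of those steps is a few lines. A minimal sketch, where the client holds only an integer offset and the server returns whatever has appeared past it (the blocking-on-`select()` part is omitted):

```python
import tempfile

def read_new_output(path, offset):
    """Return (new_bytes, new_offset) for a client-held offset."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    return data, offset + len(data)

# simulate a worker appending to its log file
log = tempfile.NamedTemporaryFile(delete=False)
log.write(b"line 1\n"); log.flush()
data, offset = read_new_output(log.name, 0)
log.write(b"line 2\n"); log.flush()
more, offset = read_new_output(log.name, offset)
```

The only state is the offset, and it lives on the client, so there is nothing like a queue or exchange to clean up.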

So after all of that, I had written a web based /usr/bin/tail.


I'm not certain what I learned from this.

And in the end, I think I'm aborting the project that caused me to go down this path.

I expect in the future I will probably find use for all of these approaches.


This happened:

<prox> cory: 2001:48c8:1:2::8000 is calling to you... USE ME, USE ME!

So now there's this:

$ host -t AAAA has IPv6 address 2001:48c8:1:2::8000

$ netstat -ln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp6       0      0 :::80                   :::*                    LISTEN

...and now this web site should be accessible via IPv6.


I just cleaned up about a dozen new users and spam pages on the NTRW MediaWiki.

It feels like I'm not handling it right. Each time I see spam, I disable the new users, usually for a year. Then I delete any new pages they created and roll back any edits they made. This is in line with the typical steps for dealing with vandalism documented for MediaWiki. What bothers me is that it seems like I should just be deleting the offending users rather than blocking them, since they're going to clutter up the legitimate user list. The usernames, page names, and anything else in the deletion logs might all contribute to some bizarre Google bombing scheme for all I know, and these steps would do nothing to help that. I would also love to just pick a range of time and obliterate it from the wiki's history; in this case, only vandalism happened in the last week. I suppose everything built to handle spam is more suited to Wikipedia's level of change throughput, where good and bad changes are constantly intermingled.

Each time this happens, I look for more MediaWiki extensions to prevent this in the future. This time I switched to Google's reCAPTCHA extension. I also ran SpamBlacklist's cleanup script, which unfortunately identified at least one valid page as spam. I failed to get its whitelist to work for me, but I didn't really care about the offending link, so I just removed it.

Then I checked external links for spammy links and was surprised to see none jump out at me, though many are quite stale/broken.

Am I missing something?

App Store Store

I thought it would be entertaining to make a storefront for app stores. I figure there should be an app store store where users can purchase, rate, and comment on app stores. I just spent a few minutes collecting icons and links to web pages for a handful of app stores, and I am stopping there for now, because I'm so disgusted.

  • Everybody has an app store. I imagine I've hardly scratched the surface of what's out there. Several companies have several completely separate-seeming storefronts for different devices.
  • Almost across the board, web frontends seem to be an afterthought. I'm supposed to run custom Windows software to even see what apps are available for some platforms. Some app store web interfaces only list a small selection of featured apps.
  • The names are terrible. It's like companies look up "place to give us money" in a thesaurus and trademark the first result that isn't taken.

Anyway, feel free to browse our fine selection of app stores below:

App Store Store

[Icon gallery: Android Market, BlackBerry App World, Chrome Web Store, DSiWare, iPhone App Store, Mac App Store, Palm App Catalog, PlayStation Network, Steam, WiiWare, Windows Phone Apps Marketplace, Xbox Live Marketplace, Zune Marketplace]

This just in: Wikipedia to the rescue.

  • Posted: 2011-01-09 21:33 (Updated: 2011-01-09 21:39)
  • Categories: internet
  • Comments (1)

Saratoga Property

The trouble I had finding a place to simply put in a kayak anywhere near Saratoga Lake got me thinking.

Kaydeross Park

First, I was confused about "Kaydeross Park" on Google Maps.

I haven't found a huge amount of information about it, but the scraps I have found indicate that Kaydeross Park was at one point an amusement park by the lake containing a carousel. Now it appears to be the private recreation facilities of the adjacent townhouses.

For the first time, I clicked that "Report a problem" link in Google Maps and explained the situation.

I believe Kaydeross Park is mislabeled. This area appears to currently be recreational facilities for the nearby townhouses. I went there expecting it to be a park from which I could launch a kayak, but instead it had only gated access for residents. A sign facing the lake labels it as "Water's Edge." I'm having trouble finding all of the pieces of its history, but it looks like "Kaydeross Park" used to be an amusement park. In 1987 it was bought by the town, and now it is townhouses.

Four days later I received a reply.

Your Google Maps problem report has been reviewed, and you were right! We'll update the map soon and email you when you can see the change.

That was over two weeks ago and it still hasn't changed, but that was more of a response than I honestly expected.

Abandoned Restaurant

The abandoned restaurant that looked so promising yet remained gated off was my second curiosity. Off I went to the Saratoga County web site and asked who owned the property, what had been there previously, and why it was not being used.

The second business day after I sent my questions, the Saratoga County Deputy County Administrator responded with some raw property information and the following comment:

FYI this parcel was purchased by the City of Saratoga Springs several years ago to develop into a waterfront park. That initiative has been stalled due to fiscal constraints within the City

I was happy with his response, happy that there was at one time a plan to develop it into a waterfront park, but I interpret this extremely pessimistically. With the state boat launch at the north end of the lake, there's really no need for another park with a parking fee, and I expect residents are against welcoming visitors.

Back to Kaydeross Park

Putting the two together, I used this, which was recommended by the Deputy County Administrator, to learn about what used to be Kaydeross Park. Not especially useful, but now I know I can curse The Vista on Saratoga for maintaining inviting-looking property.

Assessed Land Value: 58600
Deed Book: 1245
Deed Page: 128
Frontage (Ft): 0
Owner Address: 150 Kaydeross Park Rd
Owner City: Saratoga Springs
Owner Name: The Vista On Saratoga
Owner State: NY
Owner ZIP: 12866
Print Key: 192.-1-41.2
Property Address: Kaydeross Park Rd Rear
School District: Saratoga Springs Csd
Assessed Total Value: 220000

Without Internet

This week, for the third time in only a few months, I found myself without Internet access at home.

The first time, it went like this:

  • The cable modem indicates no connection.
  • I call Time Warner Cable.
  • They tell me they shut me off because of "leakage."
  • They schedule someone to come check things out (but they can't come for quite a few days... frustrating).
  • They plug some measuring device into my cable, which indicates a small but apparently acceptable amount of noise.
  • They remove the ground from my cable connection.
  • They repeat, and the amount of noise remains the same.
  • They reconnect everything, turn me back on, and figure that I was disconnected by mistake.

The second time was my fault:

  • HTTP requests were being redirected to a "your computer has been causing trouble, we're disabling you" page.
  • A local user account with an apparently easily guessable password on one of my computers had been compromised and was running a number of sketchy-looking scripts.
  • I wiped the affected machine and did a fresh install.
  • I should add that I had to call Time Warner Cable to sort things out, and they turned me back on before even asking whether I had the machine under control, though they did threaten to disable me for longer if it happens again.

The third time went like this:

  • Cable modem reports no connection.
  • I call Time Warner Cable.
  • They ask me to power cycle the cable modem and fiddle with the cable.
  • No, that doesn't improve anything.
  • They schedule an appointment.
  • The guy comes out, plugs in his meter, looks at it for a few minutes.
  • "Your signal's perfect."
  • He replaces my cable modem, and everything is fine.

It's slightly less annoying now that I have a Droid Incredible with an unlimited data plan.