Chasing the Best
by Antonio Salazar Cardozo

Using Commit Messages for Documentation (2013-05-16)

A consistent problem in software development is reading code. Reading code, like reading a book, consists of a few separate levels of understanding:

  • What does the code say?
  • What does the code mean?
  • Why was that way of expressing that meaning chosen?
  • How do I interact with the code?[1]

These are roughly in reverse order of specificity: “what does the code say” is the most specific and least informative question you can ask, while “how do I interact with the code” is one of the most high-level questions you can ask, and may not necessarily require a detailed understanding of the answers to the first three questions.

To understand what code says, we can simply read the code. That is the most fundamental level of understanding. Often, meaning is just as easy to glean. As when a book describes a character as “tall”, there's typically nothing more to it, so most code is initially fairly easy to grasp in terms of meaning. Notably, the more you grow as a developer and the more code and advice and such you read and write, the more code will fit into the category of easily understood meaning. Certain patterns that can be difficult to grasp the first or second time may become second nature over time, to where a simple glance at a piece of code quickly tells you exactly what pattern you're looking at and what it's meant to accomplish.

Properly written code should be written to expose what it says and what it means as clearly as possible. That's why we adopt relatively short functions, descriptive function names, etc. But sometimes even at that point, it's difficult to elucidate the meaning of code. That's when we apply comments. The key is that a comment should never describe (as is often preached) *what a block of code says*, but rather *how* or *why* it does what it's doing—its meaning.

The problem with comments, of course, is making sure they're up-to-date. You don't want someone to change the code but leave the comment alone. This is the perennial documentation problem, and it's a hard one to solve. Part of the solution involves avoiding comments when possible—relying instead on clear code and well-named functions and variables to convey as much of the meaning as possible. When comments are very high-level, typically you only need to update them if you rip out an entire chunk of code, when you should hopefully be more likely to remember that the comment is associated directly with that block.

There is, however, another solution. Really, it's a complement to comments rather than a replacement. That solution comes in the form of commit messages. Assuming you're properly using atomic commits, your commit messages can contain the answer to *why* a particular implementation or algorithm was chosen. A comment can grow outdated, but a commit message won't: if a given bit of code is replaced, a new commit will be created, and that commit's message will describe the reason for the new change.
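As a concrete sketch of this (the file name and commit message are invented for illustration), git's line-level history can surface that "why" on demand:

```shell
# Build a throwaway repository where the "why" lives in the commit
# message, then recover that reasoning from the line's history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo 'MAX_RETRIES = 5' > config.rb
git add config.rb
git commit -q -m 'Retry five times; three was not enough under packet loss'

# A later reader asks "why 5?" -- line-level history answers:
git log -L 1,1:config.rb --format='%s' | head -n 1
```

The last command prints the subject of every commit that touched line 1 of the file, most recent first, so the rationale surfaces without any comment in the code itself.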

Using solely commit messages for the whys and wherefores is probably not the best solution in every situation. There's a question of immediacy: if the information you want to provide is something you're likely to need every time you go over a chunk of code, maybe a comment is the better place for it. But if you're explaining the reasoning behind a chunk of code that's hidden behind a good, descriptive function name, a commit message may be the better home: someone reading the function is probably about to make changes to it, and will have the time to invest in looking for your message as part of understanding the meaning and purpose of the code.

Commit messages can probably form a large part of documentation if used properly, and perhaps if tools come to fruition that surface them more easily. One can imagine a version of reverse literate programming where the actual documentation exists in commit messages, and a tool that stitches the messages and code back together to create a coherent whole. I've played with the idea of structuring tutorials with tutorial contents in commit messages associated with the diffs of their commits—each step in the tutorial would be explained and described by its commit message. The project isn't quite usable, but it's an interesting experiment in using git features themselves for blogging, rather than just using git as a content repository for a blog.

Regardless of how hard you lean on commit messages as exclusive expositions of meaning and motivation, I think it's a good idea to write a good commit message that includes some of this information. It's somewhat redundant, yes, but commit messages are the quickest way to track the evolution of a file or block of code (or project). That makes them an ideal place to document those kinds of thoughts in a way that can be reviewed in bulk later when attempting to gain broader understanding.

I'd love to hear thoughts on this strategy, and perhaps alternate suggestions or other interesting ideas on how to deal with documentation getting outdated, and on how to leverage source commits as more than just a giant undo button (which just seems like a waste of a massively powerful tool).


[1]—There are more levels of understanding, of course, especially “what does running this code change?” Functional programming advocates want to obviate that question altogether by making the answer always be “nothing” ;) Arguably, understanding what code says and what it means is sufficient for understanding what it changes, as well.

Avoid Tables; Take the Stairs (2012-01-16)

I've recently seen an uptick in the number of people who consider the argument “tables are bad” to be mere dogma. Maybe there is some element of that; I don't disagree. It gets particularly ridiculous when people follow up with “well, don't use display: table/table-row/table-cell, either”. Now we're no longer making a semantic argument, which makes the position a lot tougher to justify. There are still arguments about document flow and any number of other things, but the position weakens.

I'd like to propose that avoiding table layouts is really the development equivalent of taking the stairs. When people talk about small shifts that can counter a sedentary lifestyle, they talk about things you usually don't think about: take the stairs up to and down from the office; park further out at the store; drink extra water so you have to go to the restroom periodically. All of these things force your hand a little, they make you stretch your legs, and they insert small breaks in your day that you can use to let your mind wander a bit.

This latter bit is a secondary advantage, not the primary purpose. But it's still powerful. If you don't whip out your phone whenever you take your short breaks or head up the stairs, it's an opportunity for your mind to get a bit creative. Maybe you explore a different set of stairs each day. Maybe you check out the restrooms on other floors, out of curiosity (depending on the office building, every floor may be the same, of course…). It's not that taking the elevator is impossible, or that any single ride introduces a huge problem. But by taking the elevator every time, you are also removing opportunities to think and explore.

Every time I run into a problem and the thought occurs “man, tables… They would make things so much easier here…”, it's a challenge. If you've been developing for a while, challenges become rarer—that is the nature of the beast. Most challenges end up translating to experimentation with new features. Here, though, we have an opportunity to explore existing features in a deeper way. You discover properties of certain layout techniques (inline-block, floats, etc.) that you didn't know they had. You gain a deeper understanding of the properties you already knew about. It's a treasure trove!

When I TAed an intro to object oriented programming class, I jokingly argued that a lot of the features in OO that make things easier for a developer stem from the fundamental truth that programmers are lazy. There are some definite truths in that claim. “Laziness” often behooves us as developers because the right kind of laziness leads to better architectures, and it also leads us to avoid premature optimization and a number of other problematic practices. Laziness done wrong, however, also means never growing. If you're too lazy to go outside, you never explore the outside world.

As a developer, you want to be lazy in terms of expanding your code base, but not lazy in terms of exploring new solutions. And building in a wall or two here or there that forces you to explore new solutions periodically is a great way to make sure you don't accidentally start standing still. Dogma or not, avoiding tables for layout gives us a chance to explore some fascinating alternate properties in CSS.

For example, I don't know whether the fact that overflow: hidden clears internal floats would have been discovered without the push to leave tables. Even if it had, who knows if it would be as widely known. And this technique isn't simply a way to clear floats; it gives you an incrementally better understanding of how overflow properties work compared to what you had before. It gives you new knowledge that is tangential to the actual problem you're trying to solve.
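As a minimal sketch of that trick (class names are hypothetical), a container whose children are all floated collapses to zero height unless something makes it contain them:

```css
/* Hypothetical example: without the overflow declaration,
   .thumbnail-row collapses to zero height because all of its children
   are floated. overflow: hidden makes the container establish a new
   block formatting context, which forces it to wrap its floats. */
.thumbnail-row {
  overflow: hidden;
}
.thumbnail-row .thumbnail {
  float: left;
  width: 120px;
  margin-right: 10px;
}
```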

Avoiding table layouts may be dogma, but I'm going to keep doing it. Every time I get that rush of having solved a particularly gnarly float-inline-block interaction to produce the layout I was looking for, I know I've learned just a little bit more of how these things work together. And I'll be looking for more such roadblocks—ideally these should be relatively small, and should only present themselves as formidable obstacles rarely—in other places, to experiment and see where they are tolerable, where they are terrible, and where they give me the most opportunity to learn. I will, in short, be trying to take the stairs a bit more often.
Guest Post: Learning to program is hard, but languages are not to blame (2012-01-12)

Recently I had a brief discussion on Twitter about the ease of use of programming languages. The points I wanted to make warranted more space than 140 characters would allow, so with thanks to Antonio for the blog space, here they are.

For decades, the holy grail of programming languages has been to create a language that is as simple to adopt as possible, while providing as vast an array of functionality as imaginable. Every new language boasts of its ease of use: "See? You can do a network call and pass the result to your view in one line!" Yet beginners still feel that programming is a black art, and many people, some experts included, feel that programming languages are to blame. I disagree.

In a perfect world, programming is as simple as talking to the computer, which interprets the commands and does what the author means. Unfortunately this requires superhuman intelligence on the computer's part: not only does it need to interpret your language of choice, resolve any ambiguity arising due to your accent, and set down what you said, it also needs to resolve any ambiguity arising due to your choice of words, catch contradictions in your instructions, ask you to resolve them, and so on. It's so complicated that human beings can't do it right: specifying requirements is one of the most complicated and error-prone processes imaginable. So the day is still far off when programming can be reduced to writing a requirements spec. Until then, we are left with using actual programming languages, which can be quite daunting.

Let's consider designing a language with user experience in mind, by which we mean that beginners would feel right at home in it (this is not the only user experience, but let's pretend it is). What would such a beast look like? Already we have languages like Ruby and Python that can look pretty much like English. If those are too difficult to understand, then clearly bringing the language closer to English doesn't help much.

We also have languages that try to help by working very hard to ensure that a certain type of consistency exists in the code; Haskell, for instance, ensures that you are working with whatever data you claim to want to work with, and that you do not change it in an uncontrolled manner. Yet Haskell is considered a difficult language to master, because in order to provide this functionality, it introduces complex mathematical abstractions that the user must thoroughly understand. Clearly, helping the user avoid errors (even if only of a restricted kind) is not a panacea, either.

What's left? Really, only one thing: providing pre-packaged solutions for known problems, a.k.a. libraries. And we have them, for every language: it has probably been years since a working programmer has had to write raw network socket calls—some generous and knowledgeable souls wrote the code, tested it thoroughly, and then donated it to the community. And libraries like jQuery take things that were once thought horrendously complex and make them trivial. This helps a lot—but it adds a new form of complexity: the libraries one must be familiar with. Again, a barrier to entry.

Let me posit, to explain this complexity whack-a-mole, that learning programming is hard not because the languages are difficult, but because programming is difficult. Why? Because it solves difficult problems.

To be sure, some languages are easier to learn than others, up to a point, but that is a very superficial ease. If your goal is to write a program that says "Hello World" on the console, half a dozen languages let you do that in one line of code—can't get much simpler than that. But if you want your browser to take a string, interpret it as a layout, add reactions to it based on user interaction, fetch data, and make the whole shebang operate as if the user were in a desktop application while making asynchronous calls to the server...well, of course it's going to be hard. No language in the world will help you because this is not a simple problem.

If you're starting to learn programming and find it hard, good! You're not deluded: it is hard. Of course, I encourage you to give yourself the benefit of the doubt when your gut instinct says "this should be easier". And if you have a way to make it easier, write it! Publish it! We will love you for it. But the truth is, if it's harder than it seems it should be, it's probably because it's not easy to simplify. Because even though it looks like a simple concept, endless edge cases will bring it into conflict with other parts of the system in unpredictable ways. Because we have not yet reached that level of abstraction.

This is really the crux of the matter: abstraction. Once upon a time, programming was literally guiding bits through a processor, with punch cards. Now we get to tell browsers (browsers!!) to lay out boxes of content, to create gradients, to display specific fonts, or even to use tables (but don't do that; we keep it civilized). Whatever problem you think could be solved by a better language probably can't, but it probably can be solved by a better abstraction, which one day might make it into a language. The languages we have today reflect the level of abstraction we have gotten to. They are our report card.

So, yes, we should try to improve our languages. But let me dispel an illusion right now: that will not make programming easier or more accessible. Because when, after blood and sweat and tears, doing X is finally easy, we will move on to wanting Y, which should be so easy, beginners could do it—if only these darned languages didn't stand in the way.

--

You can follow Alexandros on Twitter as @nomothetis.

The pendulum (or why you shouldn't be despairing over SOPA) (2011-12-22)

It seems that, at least in the geek community, pessimism is the norm. Perhaps this is rooted in a certain difficulty understanding others outside the community. Perhaps it is simply arrogance. Regardless, it's there. And it is, particularly when it comes to politics, likely unfounded.

My intent isn't really to exhort all political pessimists to stop. Rather, I ask that perhaps you consider your fatalistic interpretations in a greater context: that of a swinging pendulum. One fixed to a string, the string slung across an infinite bar. The pendulum swings back, then forward, carrying just enough momentum to drag the string forward an inch. Then back again, and forward, the string pulled another inch. This is progress. 

Progress is not this image we frequently conjure up of step after bold step: strong, steady, inexorably moving forward without a pause or waver of uncertainty. The past, in its simplest form, often looks this way. We lived in caves, now we live in cities. We cringed from lions, now we show them off in zoos, captivity our ultimate victory. But in the middle are the merciless swings of the pendulum. Forward—the Reformation. Backwards—the rigid religiosity of the Christian denominations that resulted, a defensive reflex. Forward—the New World. Backwards—brutalizing the natives, wars over every inch of this world. Forward—mastering the machine, industrialization. Backwards—two massive World Wars.

Not everything moves forward at once, and backwards movement is likewise not synchronized. Some swings take decades, some centuries. But the swing is ever there. The push back against progress is almost as strong as the progress itself—almost. And those pushing for change are always those who feel it. The ones who see and feel the setbacks. But they are also the ones who cannot give up. The ones who must not. Progress is only the stronger force because they make it so.

At times, by its very nature, progress is foisted upon the masses by the few. Then, it is foisted upon the remaining few by the masses. Computing had a bit of a battle to get integrated, considering the jobs it eliminated and the learning curve it entailed. Now it's here, foisted upon the masses by the few engineers and marketers who showed up first. But, the law has to catch up. SOPA, PIPA, all of these laws, are the flip side: the masses have to foist this progress on the few who are still resisting it. And foist we shall.

To those who fight these fights: do not despair that you must fight them. This is the way of things. We take a leap, then everyone has to be dragged to catch up. Then we take another leap. And more importantly: do not stop fighting. The fight is why progress wins, every time.

In the 1960s and '70s, South America was embroiled in a series of democratic pitches to the left, countered by a series of military coups that lurched them back to the right. The pendulum was swinging at its very wildest. The advocates of progress, those who fought for the rights of the disenfranchised and the wronged, were, as is often the case, the artists and the singers. One song that emerged from those tumultuous years continues to be a mantra to movements both in the region and around the world, and to this day remains ingrained in the memories of my parents and, through them, my own: “El pueblo unido, jamás será vencido”: the People, united, shall never be defeated.

SOPA and PIPA are fighting against us all. We may win next year, five years from now, or two decades hence. But the People, united, shall never be defeated.
Unexpected NullPointerExceptions in Lift Production Applications (2011-12-15)

TL;DR: NPEs/InvocationTargetExceptions happen when either the servlet container or the reverse proxy starts discarding requests associated with suspended continuations, which Lift uses for pending comet requests. This presumably happens when the container or proxy reaches a certain threshold of unprocessed requests and starts discarding old requests to keep up with new ones.

About eight months ago, when OpenStudy first started having a decent number of users interacting with the site at the same time, we started occasionally seeing NullPointerExceptions in Lift. These were very strange, stemming from attempts to access the server name or port, and seemed to happen mostly in comet or ajax requests. There had been a thread regarding Jetty and these issues on the Lift mailing list (https://groups.google.com/d/msg/liftweb/x1SIveK_bK0/asHehgvGXe8J) for a while, so we decided to make the move to Glassfish.

Some people had mentioned that Jetty Hightide didn't have the problem, but we tried it and ran into the same issue. Upon moving to Glassfish, we found other problems: Glassfish 3.1 leaked file descriptors on our server, so it needed to be kicked periodically. Deeming that unacceptable, we switched back to Jetty 6 and decided to absorb the NPE issues until we had a chance to look deeper at the cause. Around this time, we also switched to a bigger EC2 instance to defray some load issues without scaling out quite yet. Jetty 6 stayed a lot quieter this time.

The issues seemed tied to load from the very beginning, but the key to our conclusions was a bug that sneaked into one of our releases and caused a rapidly-increasing internal load to build up (a synchronization error in our actor registry that kept actors from being spawned properly). This proved to be the key because, once it produced the NPEs and we failed to track down the bug quickly, we decided to try the other servlet containers until we could sort things out.

We went through Glassfish again, then Jetty 7, Jetty 8, and finally Tomcat. Each and every one of these exhibited similar behavior and exceptions. The only difference that arose is that Tomcat finally gave us a useful underlying cause for the exception:

java.lang.reflect.InvocationTargetException: null
        at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source) ~[na:na]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) ~[na:1.6.0_26]
        at java.lang.reflect.Method.invoke(Method.java:597) ~[na:1.6.0_26]
        at net.liftweb.http.provider.servlet.containers.Servlet30AsyncProvider.resume(Servlet30AsyncProvider.scala:102) ~[lift-webkit_2.8.1-2.4-M4.jar:2.4-M4]
[…]
Caused by: java.lang.IllegalStateException: The request associated with the AsyncContext has already completed processing.
        at org.apache.catalina.core.AsyncContextImpl.check(AsyncContextImpl.java:438) ~[catalina.jar:7.0.23]
        at org.apache.catalina.core.AsyncContextImpl.getResponse(AsyncContextImpl.java:196) ~[catalina.jar:7.0.23]
        ... 22 common frames omitted

Finally, the underlying IllegalStateException gave us the clue we needed to figure out what exactly had happened: the servlet containers are killing off the requests before the relevant continuation is woken up to deal with said request. Jetty 7/8 gave something similar, though much less clear:

java.lang.reflect.InvocationTargetException: null
        at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) ~[na:1.6.0_26]
        at java.lang.reflect.Method.invoke(Method.java:597) ~[na:1.6.0_26]
        at net.liftweb.http.provider.servlet.containers.Servlet30AsyncProvider.resume(Servlet30AsyncProvider.scala:106) ~[lift-webkit_2.8.1-2.4-M4.jar:2.4-M4]
        at net.liftweb.http.provider.servlet.HTTPRequestServlet.resume(HTTPRequestServlet.scala:163) ~[lift-webkit_2.8.1-2.4-M4.jar:2.4-M4]
[…]
Caused by: java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:534) ~[na:na]
        ... 22 common frames omitted

All indications point to the problem being the same in the case of Jetty 6, since the behavior manifested in the same situation. We did fix our own internal bug, but shared this valuable information as to the cause of the null exceptions with David Pollak.

We've filed a ticket on Lift itself for a more informative error message in this case. That said, generally speaking, if you see this, it means there's a load issue somewhere along the application pipeline that you need to look at. Requests are dying, and you need to figure out how to deal with that. At the moment, we haven't gotten a chance to investigate whether it's the container or nginx terminating the request early, but we'll be looking into it if and when the problem shows up on our system again.
Stop SOPA, Save the Internet (2011-11-14)

http://boingboing.net/2011/11/11/stop-sopa-save-the-internet.html

I can't think of anyone who should not be opposed to this. On Wednesday, November 16th, please call your representative. The Internet is the greatest haven of free speech mankind has ever seen, and groups like the RIAA and MPAA want to close it off. Yes, this is worse than the bailout if you disagreed with the bailout. Yes, this is worse than the war in Iraq if you disagreed with that. Yes, this is worse than Occupy Wall Street or the Tea Party, depending on where you stand. Yes, this is worse than the debt limit debate.

This has nothing to do with piracy. This has to do with free speech. Share it, call your representatives, and let's stop this bill from getting passed. We are a bastion of free speech, but that bastion is cracking. Let's shore it up.
Notes on deploying Lift apps to Glassfish (2011-11-06)

I just wrapped up switching our deployment of OpenStudy from Jetty to Glassfish. We learned a couple of things, but the most important one has to do with continuations support outside Jetty. The default web.xml that ships with Lift sample apps unfortunately does not include a required parameter (and doesn't use the right schema version) to enable continuations on Glassfish. This caused us some serious head-scratching (and a lot of tied-up threads), since very few places deem it necessary to tell you that you need this setting in your web.xml.

To fix up your web.xml, you'll want to drop the doctype and change the web-app tag to the standard Servlet 3.0 declaration:

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">

This indicates you're using a more recent version of the schema, which lets you specify you want async (continuation) support. Part 2 is updating the filter declaration that sets up the Lift filter and servlet. You'll want to drop the description and display-name tags in the filter, since they no longer seem to be supported. Most critically, you need to add

<async-supported>true</async-supported>

This lets the container (Glassfish in our case) know that we want continuations for our app. If you're wondering why we're telling the container we support continuations rather than the container telling us that it supports them, I didn't find a proper explanation of that. All evidence points to this being a requirement if you want a Servlet 3.0 container to enable continuations support.
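Putting the pieces together, the filter declaration ends up looking something like the following sketch. The async-supported flag and Lift's standard filter class are the essential parts; the filter name and the /* mapping are conventional, not copied from our actual web.xml:

```xml
<filter>
  <filter-name>LiftFilter</filter-name>
  <filter-class>net.liftweb.http.LiftFilter</filter-class>
  <!-- Tells a Servlet 3.0 container this filter uses async processing -->
  <async-supported>true</async-supported>
</filter>
<filter-mapping>
  <filter-name>LiftFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```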

The other thing we quickly noticed is that Glassfish seems to use up more file descriptors than jetty does as it manages similar load. This seems like it may be related to the bug at http://java.net/jira/browse/GLASSFISH-11539, though that bug is marked fixed and we're running Glassfish 3.1.1, which is well past the 3.0.1 fix release. Nonetheless, we find the file descriptors leveling out around 34000, so as a stopgap we simply go to /etc/security/limits.conf and add:

@admin soft nofile 10000
@admin hard nofile 65535

This bumps up the file descriptor limit for processes run by users in the admin group, which includes our Glassfish process user.

So far things have been going well; we switched to Glassfish to avoid the NullPointerExceptions that Jetty starts giving us under load (more at https://groups.google.com/d/topic/liftweb/MeJnuk-lH9A/discussion). We haven't seen a similar instance with Glassfish yet, but it's early days. Fingers crossed!
Free your popup arrows (from images, mostly) (2011-07-18)

I've always been a fan of doing things without images when possible. There isn't always a clear reason for it; the classical reasoning is that it saves bandwidth and HTTP requests, etc., but to be honest my main goal is the challenge. Yes, we often derive the aforementioned benefits, but really it's just fun to figure out how to do things that *should* be doable without images in straight HTML+CSS+JavaScript. To that end, today's little project is creating a popup with an arrow, coming out of one of its borders, that points back at something. Something that looks a little like this:

Sexy, right? That little left arrow is the challenge. First off, some markup:

This is more or less lifted directly from the OpenStudy code base. I'm not going to bother detailing how the owl is taken care of; instead, I'll focus on the borders, drop shadow, and most importantly the left arrow.

Notice that there is a class `left-arrow' on the div. This is because we want to create the system so that it has relatively flexible positioning. On OpenStudy, we use left-arrow, right-arrow, bottom-left-arrow, and top-arrow. For the purposes of this post, we'll focus on left-arrow and right-arrow and leave the rest for tinkering purposes.

First off, you'll notice that there is no element dedicated to the arrow. Unfortunately, this isn't because it isn't necessary; I just like to have markup that is devoid of empty elements that are strictly presentational. When I do need them, as in this case, I put them in after the fact using JavaScript. In this case, when a tooltip is loaded via AJAX, the loading process adds a new div with class `arrow':

<div class="arrow"></div>

This is nearly the end: what remains is taking that div and turning it into the arrow we see above. To do this, we take advantage of CSS transformations. These let us, for example, rotate an element. Here, we will give the div an equal width and height and then rotate it 45 degrees so that it is a diamond instead of a square:

https://gist.github.com/1089727/4db7f762a00d2e8d968cb1df621c0f07f994084d
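In case the gist doesn't load, the rule is roughly this (the class names and exact dimensions are assumptions based on the prose, not the gist's exact contents):

```css
/* A square .arrow rotated 45 degrees into a diamond; the vendor
   prefixes reflect the browsers of the time. Dimensions illustrative. */
.popup .arrow {
  width: 20px;
  height: 20px;
  -webkit-transform: rotate(45deg);
     -moz-transform: rotate(45deg);
       -o-transform: rotate(45deg);
          transform: rotate(45deg);
}
```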

Here, we apply the 45 degree rotation and set the equal width and height. So far so good. We have a diamond on a container; if we give it a background, we see:

Now we'll give the popup itself just a little bit of styling:

https://gist.github.com/1089727/88342e10db731132e595ca13a5744233e9379d4c

And then we position the arrow absolutely within the containing div and match the background color:

https://gist.github.com/1089727/f17fd009aa3e018cbec630e369df6db49dbe1b1c
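Roughly, that positioning amounts to the following sketch (the values are illustrative, not the gist's exact numbers):

```css
/* Pull half of the 20px diamond outside the popup's left edge and
   paint it the same color as the popup, so the two read as one shape. */
.popup .arrow {
  position: absolute;
  left: -10px; /* half the diamond's width */
  top: 20px;
  background-color: #fff; /* must match the popup's background */
}
```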

The result looks like this:

Two Sides

Let's remember that we wanted to do both left and right arrows, however. We can move the specific positioning stuff to its own class, and add a version for right arrows:

https://gist.github.com/1089727/1e16a91618451883f05b60bd50ee97daf4bb0b63

If we add a version of the div in the markup with the right-arrow class instead of the left-arrow class, we get:

Brilliant!

Fill That Up With Rounded Shadowy Sexiness

Last but not least, we want to add some rounded borders and drop shadows. We'll continue with the CSS3 trend here and use border-radius and box-shadow to provide these. First off, for the main dialog:

https://gist.github.com/1089727/046199815ad28c034a7ee6cbf0901aa4911820b6

That's Real Close Now (tm), but not quite there. If we look at what that produces:

We can see here that the arrows are ugly because they're missing a drop shadow. But now that the arrows are themselves elements, we can make use of box-shadow to give them shadows, too! This was in fact one of the main motivators for making these arrows via CSS and divs: images with drop shadows don't blend well with box-shadowed divs, unless you're very careful about creating the blend area between the box-shadowed div and the arrow image. Instead, we take this approach. Adding the box-shadows to the arrows (note that they are subtly different for the right vs left arrows):

https://gist.github.com/1089727/69e1d8309f8246df007604186a2220c2a64b2eff
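The idea behind those subtly different shadows, sketched with illustrative values: because the square is rotated, the shadow offsets differ per side so the shadow falls on the arrow's outward-facing edges:

```css
/* Illustrative values; the real styles tune the offsets and blur. */
.left-arrow .arrow {
  -webkit-box-shadow: -2px 2px 4px rgba(0, 0, 0, 0.4);
  -moz-box-shadow: -2px 2px 4px rgba(0, 0, 0, 0.4);
  box-shadow: -2px 2px 4px rgba(0, 0, 0, 0.4);
}
.right-arrow .arrow {
  -webkit-box-shadow: 2px -2px 4px rgba(0, 0, 0, 0.4);
  -moz-box-shadow: 2px -2px 4px rgba(0, 0, 0, 0.4);
  box-shadow: 2px -2px 4px rgba(0, 0, 0, 0.4);
}
```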

Finally, we have this look:

That looks super-sexy.

They Will Not Stop Degrading Us

Sexy though the above is, we do have to think about Lesser Browsers (tm). To this end, at OpenStudy we degrade to arrow images without a drop shadow, and to boxes with square corners and no drop shadow. We use Modernizr, so we vary the CSS styles based on whether CSS transforms are available. If CSS transforms are not available, tough, you get no drop shadows:
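Modernizr adds feature classes such as `csstransforms' and `no-csstransforms' to the html element, so the degradation can be expressed roughly like this (selectors and image names are illustrative):

```css
/* Illustrative sketch of Modernizr-driven degradation. */
.csstransforms .popup .arrow {
  /* the rotated-div diamond, with its box-shadow */
}
.no-csstransforms .popup .arrow {
  /* plain image fallback: no rotation, no shadow */
  background: url('left-arrow.png') no-repeat;
  width: 10px;
  height: 18px;
}
```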

The non-drop-shadowed version looks like this now:

Note that the border-radius declarations are there whether or not CSS transforms are enabled. On a browser that doesn't support border-radius (*cough* IE7 *cough*), the corners will be sharp, but the arrow will remain. The two arrow images (which could be sprited, but aren't)? Here they are:

Check Dat

That's it. Obviously many of the styles in the popup shown in the initial screenshot are missing. The close button is unstyled, the text is generally untreated, and there's no proper close X. But the styles presented here are focused on how we can get a drop-shadowed arrow that varies its side by class name and is built without images. When degrading gracefully to other browsers, we leave the drop shadow at the door and use images for the arrows.

One missing piece here is that we arguably shouldn't be using left-arrow and right-arrow as classes. It's my general philosophy that classes should be semantic, indicating what something is rather than what it looks like. In this case, we could use Sass or Stylus or Less or some such package with an appropriate mixin to apply the proper arrow styles via some other class or id. But I will leave it at that for now. Sexy arrows, maximum CSS goodness. It's all you could ask for.
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212708 2011-07-09T22:11:00Z 2013-10-08T16:06:21Z Flowdock for Design, Bugs, and General Happiness
Our team at OpenStudy has been using Flowdock for some time. It's become our chief collaboration tool for development and design, and one of our main ways to communicate news. We're a small team -- 1 full-time developer, 2 development interns, 1 QA intern, 1 full-time designer, 1 CEO and 1 marketing manager. We keep Flowdock oriented towards development and forward motion; information about market analysis, business development, and competition generally is kept on Yammer.

Flowdock gives us one place with everything we need, and has been a huge boon to team communications. First and foremost, we use it for team chat. But in this case, Flowdock doesn't really offer much over an IRC room and a decent client, or something of that general sort. There are some small niceties, but nothing significant. Flowdock's real power comes from the features it adds on top of chat. Indeed, the one feature that gives us the largest benefit after chat is the ability to send emails directly into our dock.

Before I get into the multi-faceted use we make of emails, I'll mention how we use chat. OpenStudy has an office, and all the full-time employees are there during the day. Chat is useful in two broad cases: when the development interns are able to do some work, but aren't at the office, and when we're trying to get someone's attention in the office. Like many people, we like to work with headphones on sometimes to concentrate on what we're doing (though not always -- most days at least some of us are spinning up some tracks in the OpenStudy turntable room at http://turntable.fm/openstudy_trax). Flowdock lets us mention someone's name to ping them (audibly and by a desktop notification) and let them know we need to ask them something. We don't always respond immediately; indeed, that's not the point. The point is that we have an easy way to let someone know we need them without necessarily interrupting their flow. Chat also comes in useful when we're staying home sick, for obvious reasons, or when an emergency comes up at an off hour and we need to coordinate.

Email, email, email

Email makes our Flowdock go 'round. We use email for two key flows:
  • Design feedback/assets
  • Bug tracking
 
Sometimes, design gets completed at off times when we aren't in the office -- weekends, evenings, etc. To give us a head start, our designer, Siddharth, emails the designs into Flowdock so we can get a look and start thinking about/commenting on them. Other times, he needs feedback on different variations, and sends them to the dock so we can all look on our computers at our leisure.
Sometimes feedback on variations happens on the dock, sometimes in person in the office.

The more important use is for assets/final designs. These get put on the dock, tagged #design, and we can access them whenever and wherever we are. In particular, this means Siddharth can finish his designs at will, and whatever developer is tasked with implementing them grabs the one they need and builds. If we need anything else -- for example, if the developer doesn't have time to extract assets from the PSD -- we can ping him either inline or in the general chat and ask him for it, and he sends that onto the dock in turn. We haven't been missing assets since we started using Flowdock.

Part 2 is bug tracking. Yes, I saved the most interesting part for last ;) We've tried Lighthouse, we've tried GitHub Issues, and they all feel just plain too heavyweight. Flowdock offered us an opportunity to lighten the workflow up when we signed up for it back in February; we did, and our flow has been essentially painless ever since. It's the perfect fit for us.

To file a bug, we email the dock with tag #bug (we also add other tags, such as the browsers where the bug happens, depending on the situation). The email for us ends up being main+bug@openstudy... If anything is unclear, we can comment on the Influx entry and ask clarifying questions.

The next step for us is `verification' -- when a dev commits a fix and pushes it, it's time for our QA intern, Min-hee, to double-check that the bug is fixed (beyond our own checks). At this point, we tag the bug email #verification. Usually, we also comment on the bug so Min-hee has an easy link from the main group chat to the bug.

The comment added to the email puts an entry in the right-side chat pane. This is particularly useful for older bugs.

Last but not least comes the check itself. If Min-hee finds that the bug isn't fixed yet, he removes the #verification tag and comments on the bug to let us know, including any additional details if needed. We repeat this process until the bug is actually fixed; at this point, Min-hee removes the #verification and #bug tags, and adds a #resolved tag. As a side note, we've never actually searched for resolved bugs ;) I have the #bug and #design tags favorited on my Influx, since I use them so often.

The one disadvantage to this flow is that we can't associate a git push with a bug, nor can we mark a bug `verification' with a git push. Realistically, we've found the inability to mark bugs with a push to be a minimal problem. Our flow is interrupted already to do the git push itself, and, as we've just fixed a bug (or three), we are at a natural stopping point. Since we usually have the bug open in Flowdock already, we can just pop over, add the tag, and then get back to what we're doing; my average time to complete that action is 3 seconds or so. And ultimately, though the commit-bug association made me feel good and cool in Lighthouse, I haven't missed its absence in Flowdock at all.

Missing Features

Only two things are really missing in Flowdock that make life slightly harder for us. First up is the fact that we can't search for things tagged #bug that aren't also tagged #verification. This can be a minor annoyance when we're on a bug-whomping spree and have to find where the next un-fixed bug lies. Fortunately, #verification is usually the last applied tag, and Flowdock lists all tags in search result pages. As such, we can still reasonably skim and see where #verification tags are missing.

If we scan the right side of the search results, we can see if there is a #verification tag.

The second issue is that we don't have a clear count of how many bugs we have. If one of the top 8 or so tags is #bug, then there is an entry on the search page that, when hovered, will show the number of bugs; however, it would be nice if saved searches had a visible counter indicating how many items match that search. On the flip side, that would be useless for the design tag, so I'm not complaining too much.

General Happiness

All in all, the two issues above are minor nitpicks. The second feature may not even be a good idea beyond for the specific use case of bug tracking in Flowdock. The net conclusion is: Flowdock rocks, and we absolutely love our team workflow using it. Whether it'll work for you is a different question, but if it sounds appealing, you should give it a shot.

]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212709 2011-02-14T04:46:00Z 2013-10-08T16:06:21Z Backwards-word, forwards-word, and kill-word in iTerm 2

I recently switched to iTerm 2, and my biggest pain point was getting back the proper behavior of Option-Left, Option-Right, and Option-Delete: move back a word, move forward a word, and delete a word. These three are standard shortcuts in every Mac app, and not having them in iTerm 2 is painful.

So, a quick description. Option-Left and Option-Right are easy -- the most common way to achieve these is by sending the escape sequences <Esc>-b and <Esc>-f (that is, ^[b and ^[f), which the shell's line editor interprets as backward-word and forward-word. In iTerm 2, you can do that by pulling up the session configuration (in the preferences or via Cmd-I), going to the Keyboard tab, hitting the + sign at the bottom of the right pane, and pressing Option-Right in the keyboard shortcut box. Then select `Send escape sequence' from the action box and type f in the box below it:

You can then do the same for Option-Left and b.

Option-Delete is slightly more complicated. Deleting a word backwards is done by sending <Esc> followed by a delete character. However, you can't insert a delete character directly in the dialog above. Thanks to this fine blog post on word shortcuts in iTerm, we find out that the character we need is DEL, character code 0x7f, which we can insert in a few ways. I'll describe the easiest for me; the other two are in the linked blog post.

First, show the character viewer from the input method menu:

Then, select the code tables entry from the `view' box:

Finally, scroll the bottom pane down to the 0070 row and select the very last column, under F:

Then make sure the escape-sequence text box -- the same kind of box where you put the f and the b for Option-Right and Option-Left -- is selected, and click the insert button in the bottom right of the character viewer. Nothing will appear to show up in the text box, but it is there! Hit OK, and you're done!
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212710 2011-02-12T01:42:00Z 2013-10-08T16:06:21Z Node.js Experiments

So I played with Node.js for a couple of months, and we even deployed a widget for MIT's OpenCourseWare at OpenStudy with it. It was a remarkably fun experience, not least thanks to CoffeeScript, which is a fabulously awesome syntax layer over JavaScript that gets rid of many of the lamest aspects of JS syntax (for my money, the biggest change is the ability to abbreviate function() to ->).

But, ultimately, we ran into a few issues. The biggest one was that our real-time push strategy of choice, socket.io, ended up behaving rather poorly under the kinds of load characteristics we subjected it to. MIT's pages see thousands of hits a day (that's not ginormous, by the way -- about 3 hits per minute) and users tend to stay for a few minutes.

The first issue is that we park our site behind nginx, and nginx speaks only HTTP 1.0 when proxying. The result is that WebSocket connections, which require HTTP 1.1, can't be proxied through nginx (indeed, WebSocket proxy support is currently less than spectacular, and WebSocket has since proven to have some security issues). Nothing to worry about, socket.io falls back on alternate mechanisms on the (many) browsers that are missing WebSocket support, so we should still have been okay. Moreover, we were okay for this experiment with removing ourselves from being behind nginx.

The next issue is that the static serving aspect of Node.js is not quite fleshed out yet. We were using express and connect, which come with middleware to gzip outgoing responses. Unfortunately, it has serious performance issues, so using it at any scale is not really a good idea. Still no problem: for the purposes of our experiment, we were willing to disable gzip.

Finally, we found ourselves in a situation where Node.js would randomly quit on us with timeout errors (this appears to be a timeout firing without an associated handler or some such). After some investigation, it appeared that this was a known issue that was difficult to fix. We tried to apply some fixes to socket.io to handle this, but we were still getting the issues. Given chronic crashing, then, we couldn't keep up the experiment. We even tried switching back from node 0.3 (which may not have been supported by socket.io) to node 0.2, but this led to strange CPU spikes we didn't really have time to experiment with.

Ultimately, we turned our attention back to Lift and the improvements made to it since the 2.0 release. Between the designer-friendly views, CSS selector binding, and the inclusion of lift-mongodb, Lift is still treating us super-nicely. The compilation wait is still annoying, but in proper style I dealt with it by getting a quad-core Core i7 iMac, which tears through it a bit faster.

This shouldn't be taken to mean that I'm saying Node.js isn't ready for primetime. Obviously there are plenty of people employing it in production and loving every minute of it. It was fantastic to be able to code in JS both server- and client-side, and going back to a dynamic language was fun and exciting after a while in the statically typed world. Then again, going back to Lift after Node was also kind of cool, so I guess it's just change that I'm a fan of.

For the most part, Node treated us well. With CoffeeScript, the syntax was awesome, and the developers who are working on Node and its libraries are the kinds of passionate, fast-paced people you expect to find around surging technology. It just didn't happen to work out for us. But! All was not wasted. I did throw together a quick library for doing something very similar to Lift's view-first, CSS-selector based transformations in Node, so look for that over the next couple of days.

ADDENDUM: The timeout issues we faced with node.js are being tracked at https://github.com/joyent/node/issues/378 , where it's been mentioned that they may be gone with Node 0.4.0.
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212711 2010-11-29T00:27:00Z 2013-10-08T16:06:21Z Errors and callbacks in Node.js's node-mongodb-native
node-mongodb-native has the usual error handling mechanism of Node.js: we pass in a callback to handle results, and the first parameter is an error that will be null if no error occurred. For example:
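A sketch of that convention (`findAll' merely stands in for a driver call like `toArray'; it is not a real driver API):

```javascript
// Illustrative sketch of Node's error-first callback convention as
// node-mongodb-native uses it. findAll is a stand-in, not a real
// driver API; a real call would hit the database asynchronously.
function findAll(shouldFail, callback) {
  if (shouldFail) {
    callback(new Error('connection lost')); // error in first position...
  } else {
    callback(null, [{ name: 'fred' }]);     // ...or null on success
  }
}

findAll(false, function (error, docs) {
  if (error) {
    console.error('query failed: ' + error.message);
    return;
  }
  console.log('got ' + docs.length + ' document(s)');
});
```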

Most such structures in Node seem to execute the callback outside of an error context. However, in node-mongodb-native, the toArray and nextObject callbacks (and by extension a couple of others) are wrapped in a try-catch that triggers the error callback. What does this mean? If your callback throws an exception, the same callback will be invoked with the error parameter set. Thus your callback may be invoked twice: once with successful data, and once with an error corresponding to the exception thrown within your own callback.
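The wrapping behaves roughly like this simplified sketch (`toArrayLike' is illustrative, not the real driver code):

```javascript
// Illustrative simplification of the driver's try/catch wrapping;
// toArrayLike is not the real driver code.
function toArrayLike(callback) {
  try {
    callback(null, [1, 2, 3]); // first invocation: success
  } catch (err) {
    callback(err);             // second invocation: your own exception
  }
}

var calls = [];
toArrayLike(function (error, docs) {
  calls.push(error);
  if (!error) throw new Error('oops, thrown inside the callback');
});
// calls is now [null, Error]: the callback ran twice.
```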

So if you're seeing some strange double-invocations of your callback, that's probably what's cracking.

UPDATE: See issue 81 on the node-mongodb-driver github repository to track the resolution of this issue.
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212712 2010-11-11T23:58:44Z 2013-10-08T16:06:21Z Er... This is an M&M?

]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212713 2010-11-09T04:34:00Z 2013-10-08T16:06:22Z OpenStudy's Latest Features

We pushed several new features to OpenStudy today, and I wanted to give a brief rundown of a couple of them really quick.

Public Access

First off is what we call `public access'. Almost all content on the site is now accessible without logging on to OpenStudy. For a few months now, since we first started our beta, OpenStudy has been a login-only site, where login is required to view any of the study groups, studypads, or users on the site. When we were in closed beta, this made sense, and in the run up to our open beta we had a ton of other features we wanted to get in for our users to play with. Now, it's time to throw the doors open even further.

When you aren't logged in, you will see a banner on every page:

You can interact with the page as usual, or click any of these buttons to log in, sign up, or connect with Facebook. Moreover, if you want to join a study group or follow a studypad or user, you can go ahead and click the appropriate buttons and you will be prompted to log in before the action is taken.

Facebook Like and Tweet Buttons

With public access also come Facebook Like and Tweet buttons. Now, if you are on a study group or studypad, you can like it on Facebook or tweet it to your followers. Just look for the two buttons in the sidebar!

Everyone who sees the links will be able to get to the page and have a look around before deciding whether they want to sign up or not!

Help

We've been working for a while on some great help videos to explain some of the features of OpenStudy and how they can be used. Starting today, you'll find a link to the help videos in the OpenStudy header on any page:

Click on the link to view our 6 videos about OpenStudy (with full HTML5 video support, thanks to Vimeo).

We're always working on cool new stuff, so keep an eye out for the next batch of new features!
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212714 2010-11-09T04:16:00Z 2013-10-08T16:06:22Z A Passing Note About Facebook Like Buttons

In Facebook's documentation of the like button, they mention that the like button tag needs an href attribute that specifies the URL you are liking. A little further down, in the section on Open Graph tags, when discussing the og:url Open Graph tag, it says:

og:url - The canonical, permanent URL of the page representing the entity. When you use Open Graph tags, the Like button posts a link to the og:url instead of the URL in the Like button code.

Now, the problem here is that the iframe version of the like button does not, in fact, post a link to the og:url—it still requires the href attribute¹. If you have the href attribute set correctly, after someone likes the page, Facebook will crawl it and properly import your Open Graph tags; however, the href attribute on the iframe's URL must be correct for this to work.

I spent a while banging my head against the wall wondering why the like button wasn't working before I figured this one out. The symptom the like button shows is flashing briefly to a `1 person' like message and then immediately reverting to the 0 people `Like' button. Also note that the way to debug this is to pull up Firebug or the WebKit Inspector and watch the AJAX that goes back and forth. Look for the response to the Facebook AJAX call and check out the body to see what error message you received.

¹ – I'm not sure whether or not the XFBML version of the button correctly uses the og:url tag or not, as I didn't try using it.
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212715 2010-11-03T15:54:00Z 2013-10-08T16:06:22Z GA Amendment 1: If All You Did Was Post About It, It's Your Fault

I blame myself for amendment 1 passing. What I did to fight it was post on facebook and tweet/retweet on Twitter. 90-95% of the people there *already knew about it* and *already disagreed with it*. What should have happened was a real movement to prevent it from passing, on the order of magnitude of the movements we create to elect officials we want in office.

Amendment 1 was worded in a way that introduced bias, and the movement to evade that perception was lackluster and based on exactly the types of media where people were already likely to be opposed or not to vote at all.

There was no ground game, there was no organized phone opposition. The Facebook page just put out blog posts. Excellent, since I agreed enough to like the page, I must need more convincing. Maybe it was to forward by email to your friends, with whom you've probably already spoken at length. Or maybe it was so you could repost it on Facebook yourself, to reach all your friends who may not even have seen it for the noise in their feed.

Obviously, it didn't work. Current numbers have Amendment 1 passing with 67.5% of the vote, which in any election would be a landslide. I suspect the 32.5% of people who voted against Amendment 1 were the members of the Facebook page, and the voters who are actually on Twitter.

So let this be a lesson to us all. Next time there's an amendment that we are as vehemently opposed to as we seem to have been to this one, let's actually do what you do when you want a vote to go your way, instead of messing around like college kids trying to tell their friends what their date was like or hackers trying to tell everyone about their favorite new programming language.

]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212716 2010-11-02T20:07:30Z 2013-10-08T16:06:22Z Pet projects, Node.js, and Scala My post yesterday implied that all I'd be blogging about would be OpenStudy and what we're doing. This isn't entirely true, as I'll be trying to blog about the cool technology stuff I get around to playing with in my spare time. Next on the radar is definitely the ever-increasing hotness of Node.js -- if only out of curiosity. If nothing else, it feels like a nice homecoming back to some of the Ruby world's approaches to things, which is rather refreshing.

The one thing I've gotten tired of when using Scala is the compile times. It's entirely possible that as developers we ultimately gain time by having a compiler that will help us avoid type errors, and Scala's type system is beautiful and extremely powerful. For comprehensions are some of the nicest syntactic sugar I've seen in a while, and the layers that Lift's Box class adds on top of them are even nicer.

That said, I can't help but feel like the compile times slow me down. Even though this may be all perception, and I may actually develop effectively faster, the mind gets discouraged when it thinks it's getting less done, not just when it's actually getting less done. In addition, the pain of recompiling every time a change happens makes even continuous iterative testing feel like a slowdown. No longer can you just code and wait one second for your test to run -- you have to wait for the compile, change gears, and then come back to see the results. This seriously cramps the iterative test-driven process, since adding a test, waiting for it to fail, and then adding the code to make it pass becomes a much longer cycle.

The Node.js experiment is to find out about some of the various apparent positives it offers, including the ability to use Javascript on both client and server. There are a lot of goodies in Lift and in Scala that are enticing, however, so it'll be a tough fight.
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212717 2010-11-01T20:52:56Z 2013-10-08T16:06:22Z Time for some new life It's been an incredibly busy few months since I last posted, but it's time I got back into the swing of things. As the Chief Software Engineer at OpenStudy, I've been hacking day in and day out to get OpenStudy closer and closer to our vision for the future of collaborative studying online. One thing we've been a bit short on, unfortunately, is blog posts explaining what we're up to behind the scenes.

Recently, our designer and user experience architect, Siddharth Gupta, started blogging it up at http://siddharthgupta.net/, posting cool tidbits of upcoming designs and ideas that improve his workflow day-to-day. This blog, then, is meant to be the face of the developer side of OpenStudy, with summaries of new features when we deploy new stuff, as well as posts about what's coming up and teasers regarding what we're working on. I'm hoping this will get people a little more interested in and excited about the stuff that's coming and how we're improving OpenStudy every day to make it better.

In certain cases, blog posts may appear both here and on the main OpenStudy blog at http://blog.openstudy.com/ . I've got some posts cooking quietly that should be coming out in the next couple of days, so come back soon!]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212718 2010-03-08T23:11:09Z 2013-10-08T16:06:22Z Untitled

via tweetie
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212719 2009-12-15T16:54:13Z 2013-10-08T16:06:22Z The modern newspaper pitch.

via tweetie
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212720 2009-12-03T07:16:52Z 2013-10-08T16:06:22Z Flash FileReference on Snow Leopard Bug Yesterday I spent most of the day (ok, all of the day -- including sleep, total time on the bug was 24 hours) debugging an issue where parts of our app would stop responding to mouse move, over, and out events. After a lot of investigation assuming it was in the code, this article was discovered:

http://www.opencoder.co.uk/2009/09/bug-in-flash-player-filereference-browse-a...

Basically on Snow Leopard, in a 32-bit browser (most commonly, Firefox), the Flash player's FileReference upload dialog will cause the Flash Player to not respond to mouse moves or hovers until you (most commonly) switch to another application and then back to the browser.

Throwing this out there to hopefully give the above post some more Google juice and such, and for the future reference of myself and others.]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212464 2009-11-03T20:01:00Z 2013-10-08T16:06:19Z FREAKING FINALLY!!! #fb

via tweetie
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212465 2009-10-11T20:08:29Z 2013-10-08T16:06:19Z Yeah... Those kinds of foods are related... #fb

via tweetie
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212466 2009-09-18T23:14:03Z 2013-10-08T16:06:19Z OJUtils: A set of utilities for Objective-J In developing Strophe.j and ojspec (more coming on that soon), I ran into the need for a couple of utility classes. To collect these utility classes, I've created a github repository named OJUtils. Currently the two classes in there are BlankSlate and DelegateProxy.

BlankSlate

BlankSlate is an equivalent to the BlankSlate class in Ruby: it provides the absolute minimum necessary for an Objective-J class to function correctly. This makes it ideal for use in proxying scenarios when you want as few methods interfering with proxying as possible. I use it in ojspec to set up a mocking framework. BlankSlate has 3 class methods and 5 instance methods, and nothing else. These are:

  • + alloc -- A basic alloc method that correctly sets up a bare bones object.
  • + initialize -- A blank implementation that is necessary for correct functioning in the Objective-J world.
  • + load -- Likewise a blank implementation necessary for correct functionality in Objective-J.
  • - description -- A barebones implementation that displays the class name with the hash.
  • - methodSignatureForSelector: -- An implementation that always returns null (thus never calling forwardInvocation:)
  • - forward:: -- A full implementation taken basically verbatim from CPObject. Calls forwardInvocation: when an unknown method is called but has a non-null methodSignatureForSelector:.
  • - forwardInvocation: -- Invokes doesNotRecognizeSelector: immediately.
  • - doesNotRecognizeSelector: -- Raises a CPInvalidArgumentException immediately.

No other methods are defined, so a proxy class can easily use forwardInvocation: to proxy almost all method calls successfully.

Something like BlankSlate is likely to go into Cappuccino itself soon (I'm hoping to have time to move this over there) -- possibly based on NSProxy concepts.

DelegateProxy

There is an oft-seen pattern in the Cappuccino codebase when invoking something on a delegate: first check that the delegate responds to the selector in question with respondsToSelector:, and only then send the message.

It seemed annoying to do this every time, and thus during the creation of Strophe.j was born DelegateProxy. DelegateProxy acts as a simple wrapper around your delegate on which you can invoke any method you want with the knowledge that it will only be forwarded to the delegate if the delegate can handle it. If it can't, then the method call will be silently ignored without causing any errors. This means that code like the above turns into a simple message send straight to the proxy, with no respondsToSelector: check.

DelegateProxy itself is a super-simple class, and relies on BlankSlate to provide a very basic starting point. All you need to do is ensure that you keep a reference to a DelegateProxy wrapped around your delegate instead of keeping your delegate itself.

Creating a new DelegateProxy is as simple as wrapping your delegate in one at the point where the delegate is assigned.

And that's it, you're ready to just do method invocations without worrying about respondsToSelector checks!
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212467 2009-08-14T19:29:09Z 2013-10-08T16:06:19Z Strophe.j -- An Objective-J Wrapper for strophe.js As I evaluated Cappuccino as our potential next step to move parts of our UI from Flex (the parts that are not necessarily well-suited to Flex's strengths), I was in charge of assembling some quick prototype code to show how Cappuccino would work for us. Since a part of our app is XMPP communications and publish-subscribe, I started writing a simple wrapper to the strophe.js Javascript Jabber library. It implements Jabber using BOSH, which uses long-lived (usually 60-second) AJAX requests to maintain a Jabber connection, instead of an always-on TCP connection to a port (which browsers currently cannot do).

 Strophe.j is this wrapper, and it's available on github at http://github.com/Shadowfiend/Strophe.j/tree/master . It provides a very basic implementation of the current user, a connection, and basic MUC support. It has some stubs for handling rosters, though they aren't implemented, and it pulls in jquery (for parsing the Jabber XML when needed). It's in the early stages, and my work on it is likely to be sporadic, since our ultimate decision was to go with GWT... But I kind of fell in love with Objective-J, so I won't be going anywhere anytime soon.]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212468 2009-07-02T20:20:04Z 2013-10-08T16:06:19Z Havoc-Wreaking Flex Transitions

Flex transitions are damn sexy beasts. They let you animate things like movements and resizing. The trouble is that they do all of this asynchronously, firing every so many milliseconds. The result is that occasionally, a Flex transition will get in the way of a change you make. This isn't a very long post, more of a warning. If for some reason you try to (for example) resize or move an object and find that it doesn't work, make sure that you don't have a transition overriding your size settings somewhere. Occasionally, these can fire at just the wrong moment so that you end up with exactly the wrong result, but without your really being able to see the transition happening. This happens especially if the transition is supposed to happen over a very short period of time.
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212469 2009-06-07T22:51:48Z 2013-10-08T16:06:19Z Untitled

via tweetie
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212470 2009-04-30T23:07:00Z 2013-10-08T16:06:19Z Flexlib's SuperTabNavigator and Truncated Labels

A quick "uh-oh, bug fix!" post here. We've been using the SuperTabNavigator from flexlib for some of our tabbing needs, and recently realized we had run into a bug: if your navigator is set to limit tab width, thus causing labels to get truncated, you will have trouble editing the labels on those tabs.


The SuperTabNavigator provides, amongst other features, the ability to edit a tab label inline. This is pretty cool, and the way it is implemented is even cooler: the same textField that the Tab uses to display the label has its type property set to TextFieldType.INPUT to make it editable. Pretty slick.

The trouble comes when the label is truncated. By default, when tabs have to truncate their labels, they keep their label property the same, set their tooltip to the full label text, and then set their textField's text to a truncated version of the label with "..." on the end. This works out nicely, as it means you have a truncated label for width purposes and can get the full label by looking at the tab's tooltip. But! Since SuperTabNavigator uses that same text field for editing, when you go to edit the text, it's the truncated version! That is to say, if I had a label that was "This is a cool tab", truncated to "This is a coo...", then when I went to edit it, I would be editing the text "This is a coo...", not my real label text!

This is a pretty huge problem, but it's also relatively easy to work around. When the SuperTab is switched into edit mode, we simply set its textField's text back to the value of the tab's label (remember that the label property is never truncated; only the text in the field is). When the text is updated, the field will get truncated again by the Tab class's own handling of the label property. Thus, we get the right editing functionality and the right viewing functionality.

Implementing this fix is a bit complicated, however. Basically, you need to implement it in a subclass of SuperTab. That's easy enough. The trouble comes with the fact that we need the SuperTabBar to use that subclass of SuperTab. Ok, so we create a subclass of SuperTabBar that instantiates FixedSuperTabs instead of SuperTabs. Now the problem is getting the SuperTabNavigator to use the FixedSuperTabBar instead of the SuperTabBar. So we need a trivial subclass of that. The solution is a set of three classes: FixedSuperTabNavigator, FixedSuperTabBar, and FixedSuperTab, with the real work done in the last one. We'll have a quick look at the code for the fix in FixedSuperTab:


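A sketch of what FixedSuperTab does, assuming flexlib's SuperTab exposes an editableLabel setter and the inherited textField described above (the package name here is illustrative, and the exact import path may differ across flexlib versions):

```actionscript
package com.example.tabs
{
    // Illustrative package; adjust the import to your flexlib version.
    import flexlib.controls.tabBarClasses.SuperTab;

    public class FixedSuperTab extends SuperTab
    {
        override public function set editableLabel(value:Boolean):void
        {
            // Let SuperTab do its thing first.
            super.editableLabel = value;

            // When entering edit mode, swap the (possibly truncated)
            // display text back to the full label, then redo the
            // selection, which the changed text has invalidated.
            if (value && textField.text != label)
            {
                textField.text = label;
                textField.setSelection(0, textField.text.length);
            }
        }
    }
}
```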
We override the setter for editableLabel (which is where we are switched into editing mode). We let SuperTab do its thing, and then, if we need to, we do our own overriding of the textField's text. Then we also redo the selection of the text -- SuperTab selects the full text when the tab is made editable, but we've just changed the text in the field, so the selection will at best be off and at worst non-existent. Thus, we redo it so that it is correct.

Here is a ZIP file with the three necessary files for a fixed editing experience. I need to submit a patch for this to the flexlib folks, but this will do as a stopgap.

UPDATE: I went ahead and posted a patch to the appropriate issue at http://code.google.com/p/flexlib/issues/detail?id=82

]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212471 2009-04-28T20:50:00Z 2016-11-21T12:58:23Z Using Glows to Change Image Color in Flex

In a few cases in our application, we've been using buttons that consist primarily of an image that must then change colors for hover and down/selected states. For each of these, we initially embedded the images for the three states into the application and referred to them from there. The trouble is that this means a bit of bloat in the app itself: not much for small buttons, more for larger ones, but if the space is unnecessary, there's no need to have it there. Flex applications can get relatively large (the main module of our app is around 700K at the moment), and it's always nice to save some space. With that in mind, how can we include fewer assets in these cases?

In our case, we decided to use a Glow effect. The Glow effect provides a certain color tint to a given component. By default, it's an external glow, meaning there is some color emanating from the borders of that component. However, there are a few properties of the effect that we can use to our great advantage:
  • The inner property can be used to switch the glow from an effect that emanates from the borders outwards to one that emanates inwards and over the component.
  • The blurX and blurY properties can be used, at least on relatively small components, to make the glow fill the entire image with the color. Usually the glow color fades, but if these two properties are big enough then the color spreads into the entire image at the same intensity.

A downside of the filter is that, in order to actually modify the color it is glowing with, you have to remove it from the list of a component's filters and then add it back in. Simply changing the color property is not enough.

For all the discussion of rollovers above, the biggest win of using a glow to change colors is the ability to run a color transition on an image. In our case, we have a thumbs-up image that changes colors when a user is rated up: the image transitions from green to white and then back. Usually this would be relatively difficult to achieve; with the glow effect, however, we can change the color of the image without any additional embedded assets.

Here is an adaptation of that code:

<?xml version="1.0" encoding="utf-8"?>
<mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml"
           xmlns:effects="com.darronschall.effects.*"
           creationComplete="init()">
    <mx:Script>
        <![CDATA[
            import mx.events.EffectEvent;

            [Embed('assets/images/thumbs_up.png')]
            private static const HELPFUL_THUMBS_UP:Class;
            private static const THUMBS_UP_DEFAULT_COLOR:Number = 0xC3DA6E;

            [Bindable]
            public var helpfulImageColor:Number = THUMBS_UP_DEFAULT_COLOR;

            public function highlight(newColor:Number = NaN):void
            {
                if (! isNaN(newColor))
                {
                    helpfulImageColor = newColor;
                    reloadHelpfulImageGlow();
                    ratingLabel.setStyle("color", newColor);
                }
            }

            public function fadeHighlight():void
            {
                highlightFader.play();
            }

            /**
             * In order for a glow to have its color changed, it needs to be
             * removed and re-applied. This function does that for the helpful
             * image glow filter.
             */
            private function reloadHelpfulImageGlow():void
            {
                helpfulImage.filters = null;
                helpfulImage.filters = [helpfulImageGlow];
            }
        ]]>
    </mx:Script>
    <mx:Parallel id="highlightFader" duration="2000">
        <effects:AnimateColor target="{ratingLabel}"
                              property="color" isStyle="true"
                              toValue="{THUMBS_UP_DEFAULT_COLOR}" />
        <effects:AnimateColor target="{this}"
                              property="helpfulImageColor"
                              toValue="{THUMBS_UP_DEFAULT_COLOR}"
                              tweenUpdate="reloadHelpfulImageGlow();" />
    </mx:Parallel>

    <mx:GlowFilter id="helpfulImageGlow"
                   color="{helpfulImageColor}"
                   blurX="20" blurY="20"
                   alpha="1"
                   inner="true" />

    <mx:VBox styleName="ratingWrapper" height="24">
        <mx:Label id="ratingLabel" styleName="ratingLabel"
                  width="100%" height="8"
                  text="{data.rating}" />
        <mx:Image id="helpfulImage" styleName="helpfulImage"
                  height="10"
                  scaleContent="true"
                  source="{HELPFUL_THUMBS_UP}"
                  filters="{[helpfulImageGlow]}" />
    </mx:VBox>
</mx:Canvas>

Here, we give the color animation a tweenUpdate function that reloads the glow filter on the image. As I mentioned above, this is necessary because the filter is not reapplied unless you remove it and re-add it; simply changing the color property of the filter, as the AnimateColor effect does, does not force that refresh. Other than that, most of it is pretty self-explanatory. We have a bindable color that is updated continuously by the AnimateColor effect, and a parallel effect that updates the label that accompanies the image. The component that contains this in our application changes its own background color at the same time, so the rating stays legible through the color change.

All in all, glow filters are useful for more than simply applying a glow. This technique can be a real keeper for color transitions on images.
]]>
Antonio Salazar Cardozo
tag:shadowfiend.posthaven.com,2013:Post/212472 2009-04-24T18:58:00Z 2013-10-08T16:06:19Z Pulling and merging with git

When you have a multi-user workflow with git, you usually have a central repository that you clone, and then you periodically push your changes back to it and pull the changes others have made down from it. On the surface, this seems sort of like a subversion workflow, only you can also commit locally; in practice, however, it is very different. Each clone of the repository is essentially a potentially different branch of changes. You and Mike (a fictional über-coder) may both have the same version from the central repository, and then you make eight commits and he makes 29. What, then, does git do?
 
In essence, every commit you make in git can have multiple "children", so the commit history forms a tree. A lot of times, that tree degenerates to a list: if I commit six times in a row, each new commit is a child of the previous one, and they happen linearly. However, in the case we mentioned above, the last commit that you and Mike synchronized on from the central repository has two children -- one is your first commit, and one is Mike's first commit. From there, you each commit linearly, in parallel. How you merge these is the important part.
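To make that fork concrete, here's a throwaway reproduction you can run (all paths and commit messages are illustrative): two clones of a central repository, diverging after a shared commit.

```shell
set -e
rm -rf /tmp/central /tmp/you /tmp/mike
git init -q --bare /tmp/central
git clone -q /tmp/central /tmp/you
git -C /tmp/you -c user.name=you -c user.email=you@example.com \
    commit -q --allow-empty -m "sync point"
git -C /tmp/you push -q origin HEAD
git clone -q /tmp/central /tmp/mike
# Mike commits and pushes first; you commit locally without pushing:
git -C /tmp/mike -c user.name=mike -c user.email=mike@example.com \
    commit -q --allow-empty -m "mike's commit"
git -C /tmp/mike push -q origin HEAD
git -C /tmp/you -c user.name=you -c user.email=you@example.com \
    commit -q --allow-empty -m "your commit"
# Fetch mike's commit; the history now forks at "sync point":
git -C /tmp/you fetch -q origin
git -C /tmp/you log --graph --oneline --all
```

The final `git log --graph --oneline --all` draws both lines of development branching off the shared "sync point" commit.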
 
At some point, Mike pushes his changes. He does:
 

# git push
Counting objects: 104, done.
Compressing objects: 100% (73/73), done.
Writing objects: 100% (73/73), 7.13 KiB, done.
Total 73 (delta 58), reused 0 (delta 0)
To git@github.com:magic/mikeFreakingRox0rz.git
  bc06779..d79eb17 master -> master


 
Because Mike pushed his repository to the server before anyone else, what ends up on the server is his version of events: a linear list of commits starting from the last synchronization point. Until anyone else wants to work with this, that's fine. Then, you try to push:
 

# git push
To git@github.com:magic/mikeFreakingRox0rz.git
 ! [rejected]    master -> master (non-fast forward)
error: failed to push some refs to 'git@github.com:magic/mikeFreakingRox0rz.git'


 
Uh-oh. What happened? Well, a push basically copies your repository back to the server, but if you were to do that now, you would overwrite all of Mike's changes. Since your local set of commits and the remote set have now diverged, git doesn't let you push. The key difference between what you did and what Mike did is that when Mike pushed, the remote server had a subset of his changes, whereas when you pushed, the remote server had changes you had never seen. All in all, this is similar to trying to commit to an SVN repository after someone else has committed before you: to avoid incoherency in the stored differences, you can't do it.
 
In SVN, the next step is to update. This pulls down the new revisions, and, if you are fortunate, there are no conflicts with the changes you made and you are good to go. In git, the next step is to pull:
 

# git pull
remote: Counting objects: 20, done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 11 (delta 8), reused 0 (delta 0)
Unpacking objects: 100% (11/11), done.
 From git@github.com:magic/mikeFreakingRox0rz.git
  38620e5..058cf00 master   -> origin/master
Merge made by recursive. 
 mike/is/awesome/prove.rb |  4 ++-- 
 1 files changed, 2 insertions(+), 2 deletions(-)


 
Usually you just fire this off, do your push again, and forget about it. But it's important to understand what is going on in this step. When I pull from the remote repository and I have changes that are not in it, we need to reconcile those differences. In terms of the commit tree, the original sync commit had two child lists; those two lists must now be reconciled and joined into a single commit, which represents a coherent continuation point for everyone. In essence, the two branches that have diverged need to converge again before we push back to the server. Git needs a single head commit -- essentially the commit at the end of your tree -- to work correctly.
 
Anytime you pull from a repository and it has commits that you don't have, those commits are added to your local repository, and then a single merge commit is created that merges the last commit from the remote repository with the last commit from your local repository. In the above transcript from git, that happened automatically:
 

Merge made by recursive.


 
This means git went ahead and merged the two and didn't find any conflicts it couldn't resolve. In this case, git also automatically created the merge commit for you. If you do a git log afterwards, you will see it:
 

# git log
commit eb8442977176a95568e27b40c169e2d97ab4e8f7
Merge: 0f6ef2d... 058cf00...
Author: Antonio Salazar Cardozo
Date:   Wed Apr 22 16:55:41 2009 -0400

    Merge branch 'master' of git@github.com:magic/mikeFreakingRox0rz.git


 
At this point, your repository is ready to push back, and you can do a git push as above and this time it will work.
 
The above is the ideal situation, but there are two important things that can go wrong when you pull. A pull in git is a combination of a fetch command (which pulls the remote repository in its current form into a local copy of that repository) and a merge command (which merges that local copy of the remote repository into your local working repository). Generally, the fetch works (unless you have a connection issue); it's the merge that occasionally fails. For example:
 

# git pull
remote: Counting objects: 48, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 26 (delta 20), reused 0 (delta 0)
Unpacking objects: 100% (26/26), done.
 From git@github.com:magic/mikeFreakingRox0rz.git
  38620e5..eb84429 master   -> origin/master
Updating 38620e5..eb84429
mike/is/awesome/prove.rb: needs update
error: Entry 'mike/is/awesome/prove.rb' not uptodate. Cannot merge.


 
This happens when you have made changes locally that haven't yet been committed. Git merges committed files happily, but it doesn't even try to merge changes that are not yet committed into the repository. The solution here is to just commit:
 

# git commit -m "He is awesomer." mike/is/awesome/prove.rb


 
Once that's done, you can do one of two things. You can run a pull again, but that will redo the fetch, and you've already fetched the remote repository. The faster alternative is to just redo the merge part of the pull:
 

# git merge origin/master


 
This merges from the master branch of the origin remote, which is the remote repository and branch we've been working with. At this point, the merge will usually succeed. However, it may fail yet again, with an error that looks more like:
 

Auto-merged mike/is/awesome/prove.rb 
CONFLICT (content): Merge conflict in mike/is/awesome/prove.rb


 
This may happen for multiple files. At this point, the repository is left in an unsteady state, and the only way to move forward is to resolve the conflict and then commit. The commit will be the merge commit we saw in the log above, and will include any automatic merging that git has done successfully. The easiest way to perform the merge is:
 

# git mergetool
merge tool candidates: kdiff3 tkdiff xxdiff meld gvimdiff opendiff emerge vimdiff 
Merging the files: mike/is/awesome/prove.rb 

Normal merge conflict for 'mike/is/awesome/prove.rb': 
  {local}: modified
  {remote}: modified 
Hit return to start merge resolution tool (opendiff):


 
git mergetool essentially walks you through each file with conflicts and offers you the choice of which program you want to resolve the conflict with. It detects what programs are available and makes an intelligent selection based on that. On Mac, it tends to be opendiff. Once you open the file, you can resolve the conflict and then save it. mergetool may prompt you to verify that you've resolved the conflict (that usually only happens if you leave everything unchanged). Then, when you've done all the files, you are returned to the command prompt. At that point, run git status, double-check that everything looks good, add anything you want to commit if you haven't done so yet, and then commit. At that point, your merge commit is done and you can push again.
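If you'd rather not use a graphical tool, you can also resolve a conflict by hand: open the conflicted file, fix the area between the <<<<<<< and >>>>>>> markers git leaves behind, then add and commit. A throwaway sketch (paths, file names, and messages are illustrative):

```shell
set -e
rm -rf /tmp/c-central /tmp/c-a /tmp/c-b
git init -q --bare /tmp/c-central
git clone -q /tmp/c-central /tmp/c-a
cd /tmp/c-a
echo "original line" > prove.rb
git add prove.rb
git -c user.name=a -c user.email=a@example.com commit -q -m "base"
git push -q origin HEAD
# A second clone edits the same line and pushes first:
git clone -q /tmp/c-central /tmp/c-b
echo "b's version" > /tmp/c-b/prove.rb
git -C /tmp/c-b -c user.name=b -c user.email=b@example.com \
    commit -q -am "b's change"
git -C /tmp/c-b push -q origin HEAD
# Meanwhile, we edit the same line locally:
echo "a's version" > prove.rb
git -c user.name=a -c user.email=a@example.com commit -q -am "a's change"
branch=$(git symbolic-ref --short HEAD)
git fetch -q origin
# This merge conflicts; git leaves <<<<<<< markers in prove.rb:
git merge "origin/$branch" || true
grep "<<<<<<<" prove.rb
# Resolve by hand (here we just pick a final version), add, commit:
echo "resolved version" > prove.rb
git add prove.rb
git -c user.name=a -c user.email=a@example.com commit -q -m "merge b"
```

The final commit is the merge commit, exactly as with mergetool; git doesn't care how the markers got resolved, only that the resolved file was added before committing.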
 
One last thing worth mentioning is an alternative to committing your changes when you get the "not uptodate" error. That error means that you have uncommitted changes in your working tree on a file that git is trying to merge. You can commit them, but you may not be done with those changes yet. In these cases, you can use git stash. If you just run git stash, it will stash all changes to your working tree away in a secret place and leave your working tree clean for a merge. Then, when you've done the merge, you can run git stash apply, which will apply the latest stashed changes to your working tree. This is a good way to pause your work, merge, and then pick your work back up.
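A minimal sketch of that stash workflow (the repository path and file name are illustrative):

```shell
set -e
rm -rf /tmp/stash-demo
git init -q /tmp/stash-demo
cd /tmp/stash-demo
echo "committed line" > notes.txt
git add notes.txt
git -c user.name=you -c user.email=you@example.com commit -q -m "base"
# An in-progress change we aren't ready to commit yet:
echo "work in progress" >> notes.txt
git stash push -q     # working tree is clean again; safe to merge
# ... git pull / git merge would go here ...
git stash apply -q    # the in-progress change comes back
grep "work in progress" notes.txt
```

Note that git stash apply leaves the stash entry in the stash list; git stash pop applies it and drops it in one step.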

]]>
Antonio Salazar Cardozo