CDI talk at Houston JUG Wednesday (3/31)

I'm continuing my tour of Texas JUGs this Wednesday at the Houston Java Users Group. I'll be speaking on CDI (JSR 299), which is the core of Java EE 6. My goal is to give a good introduction to the core concepts of CDI and to explain the motivation behind the development of these new technologies. My talk at AustinJUG last month went really well, especially the Q&A, so bring your questions and hopefully we can have some good discussions. It's been a few years since I've done an HJUG, so I'm really looking forward to seeing everyone. Don't forget to register if you are coming.

Maybe I've been wrong about Javascript

For quite some time, I've steadfastly held the position that, in the context of web application development, if you were writing HTML or Javascript by hand you were doing something wrong.  HTML and Javascript are the bytecode of web development and should only be produced by tools or by libraries.  Web application developers should be working at a higher level.  As a result, I've been primarily interested in technologies like JavaServer Faces that revolve around higher level component models that abstract away the gory details of HTML, Javascript and even HTTP itself.   

While I haven't exactly changed my tune, recent experience suggests that I may have been too critical of developing apps in Javascript. We've been developing our latest product at Socialware using Ext, and I have to say that I am quite impressed. Ext is a Javascript framework that gives you a full-featured component model like you would get with something like JSF, except that the component model is implemented on the client side instead of the server side. The Javascript you write ends up looking more like high-level server application code than the low-level client-side glue I've always associated with HTML+Javascript development.

With frameworks like Ext available, doing a full application RIA-style on the client becomes not just reasonable but quite compelling. This style of development does push you toward a more service-oriented architecture, which could increase app complexity a bit if you hadn't planned to go that route. However, more and more applications are adding service layers to support non-web front ends (Flash, iPhone, desktop clients, etc.), so for many apps a service layer is, or will be, unavoidable. Although in some cases it might be more work, the flexibility it buys seems well worth it.

I'm going to reserve judgement until I have a little more experience, but so far things have been looking really good. I expect that the next year will find me writing more and more of my web application code in the browser and less and less of it on the server.

Amazon Simple Queue Service

I'm absolutely loving working with Amazon Web Services at the new job.  Almost all of our dev infrastructure is running on EC2, as are all of our production and test deployments.  With the exception of our personal machines, we're pretty much entirely in the clouds.

For the last few weeks, I've been working on our newest product. My part of the system involved managing a large number of asynchronous backend processing tasks. Messaging seemed like the right way to farm these tasks out to processing nodes, and so I was faced with my first real decision: should I go with a traditional messaging system, or should I try out something like Simple Queue Service?

After my years at JBoss, I'm very comfortable with JMS. I've worked with enough customers on messaging issues to know how to build a robust, scalable messaging infrastructure. I'm really impressed with what the guys on the HornetQ team are doing, and I was pretty sure I would choose HornetQ if I decided to deploy our own messaging system.

However, when I thought about the needs of our system, I realized that I wasn't really looking for a high-performance message bus. And, although I hear HornetQ clustering is pretty straightforward, I wasn't really interested in deploying and managing a cluster of messaging servers. What I really wanted was exactly what Amazon SQS provides.

SQS requires no setup or management and is almost infinitely scalable. It doesn't offer fast throughput, but it's very robust and dependable. Although I haven't seen any service outages, the kind of system I'm working on doesn't even require high availability. I just need to know that I can reliably push out messages that will eventually be processed, and that if the amount of work increases I can scale up the number of processing nodes receiving those messages.
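To give a sense of how little there is to it, here is roughly what the push-and-process loop looks like with the AWS SDK for Java. This is just a minimal sketch: the queue name, credentials and message body are placeholders I made up for the example, and a real worker would poll in a loop and handle errors.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClient;
    import com.amazonaws.services.sqs.model.*;

    public class SqsSketch {
        public static void main(String[] args) {
            // Placeholder credentials -- substitute your own AWS access keys
            AmazonSQS sqs = new AmazonSQSClient(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Create (or look up) the work queue; the name is made up for this example
            String queueUrl = sqs.createQueue(
                    new CreateQueueRequest("work-queue")).getQueueUrl();

            // Producer side: push a task description onto the queue
            sqs.sendMessage(new SendMessageRequest(queueUrl, "process-item-42"));

            // Worker side: receive a message, do the work, then delete it so it
            // isn't redelivered once the visibility timeout expires
            ReceiveMessageResult result =
                    sqs.receiveMessage(new ReceiveMessageRequest(queueUrl));
            for (Message message : result.getMessages()) {
                System.out.println("handling: " + message.getBody());
                sqs.deleteMessage(
                        new DeleteMessageRequest(queueUrl, message.getReceiptHandle()));
            }
        }
    }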

That's exactly what SQS gives me. In fact, it's even a little better than I originally thought. At first I assumed I would need to monitor the queue and manually spin up and shut down instances based on our load, and that I would eventually write some sort of management system to perform that task, probably sooner rather than later. But it turns out that with Amazon it's pretty easy to monitor queue levels and automatically start up and shut down worker nodes, regulating queue depth like a thermostat.

I'm seriously sold on this. I have a reliable messaging system with almost perfect scalability and adaptability. I was able to focus on my domain task rather than worrying about my infrastructure, and in the end I produced my entire system in less time than I would normally have allocated just to setting up the infrastructure I needed.

I should say that just because SQS met my needs, it doesn't mean that you can necessarily throw away your messaging infrastructure and move to Amazon too. If my throughput requirements or the nature of my tasks were different, SQS might not have been a good fit. If I had more complex message selection criteria or needed more control over message delivery, it would not be a good fit either. The SQS interface simply is not as flexible as other messaging services. But if it does fit your task, I highly recommend giving it a try. I'm very pleased with the service so far.

The worst interview question ever

I left Red Hat for my current job at Socialware last month. I had been thinking about leaving for some time, but the whole process happened quite quickly. It pretty much happened entirely over a week of forced vacation at the end of December (something I still consider to be corporate theft of vacation time, but we'll leave that for another post). I knew pretty quickly that the company was a good fit: an early-stage startup with good people and some good technical challenges.

I've talked about this before, but one thing I haven't told many people is that earlier that month I had talked with another company. I won't name names, but it's a large company that I discovered was starting to offer cloud services. I've been very eager to start working with Amazon-style cloud services, and this company was a provider of them. That seemed like a perfect fit for my JBoss background, so I tried to make contact. I was worried about the size of the company, especially since one of my complaints about Red Hat was how big it had become.

I made contact with a technical person in the cloud services group, and I was excited to talk with him about what they were doing and whether they might be a good fit for me. Before we were scheduled to talk, I got a call from an HR guy at the company. I had been hoping to avoid HR, and any company whose hiring process puts an HR guy as first contact was probably bigger than I really wanted to deal with. However, their technology was interesting enough that I decided to give it a go anyway.

When he called, I explained a bit about my background, what I was looking for, and why I had specifically contacted the group in question. Everything was going well, and I thought he should be able to tell that I was a serious candidate who really should be talking to the technical person I was already scheduled to talk to. So I was surprised when, out of the blue, he asked:

So, tell me.  Have you ever worked with any design patterns?

Uh, yeah.... I think I may have heard of design patterns before. In fact, I'm sure I had mentioned at least a few while talking about the things I had been working on. I answered politely in the affirmative, barely able to contain my surprise at the question. The next question was perhaps the worst interview question I've ever been asked.

Can you name a few of the design patterns you've used?

Seriously? You want me to read the table of contents of GoF to you? Really? I was speechless, literally unable to find the words to respond: a real "There's no emoticon for what I'm feeling" moment. I'm pretty sure I set my phone on the table and just stared blankly for a minute.

I try not to be a prima donna about these things, but it's kind of insulting to ask. Maybe you could ask a regurgitation question like that of an intern candidate who really might have no practical experience, just to see where they are, but beyond that it's really a waste of everyone's time. Any company that would put up that kind of nonsense as a front-line HR screen is just not the kind of company I'm interested in working with.

If you really feel the need to screen for the basics, at least have the courtesy to ask in the context of actual work and experience. There are plenty of ways to ask a basic design patterns question in the context of an interviewee's experience if you are actually paying attention. And if the HR guy isn't technical enough to carry on a low-level screen, then perhaps you should assign a technical person to do the job.

I have no problem explaining what I've done or demonstrating that I'm not just full of hot air. For example, at Socialware we did some whiteboard coding and softball Java problems that required only a modest amount of non-book experience. But those were more jumping-off points for discussion and a test of communication skills than a textbook screen to see if I could recite a list of things or produce a specific answer. I had fun talking to the guys, and I felt they were people I could work productively with.

Maybe I'm making too much of a thoughtless question, but I really think companies should consider how the questions they ask represent them. If you can't be bothered to engage a candidate in a meaningful discussion, why would that candidate want to work for you? My perspective is that at a job interview, the candidate should be carefully interviewing the company at the same time. In this case, the unnamed large company failed my screen.

Can computer viruses evolve?

The latest episode of This Week in Virology posed an interesting question. A listener (at about 20 minutes before the end) asked whether the similarities between biological viruses and computer viruses are close enough that it would be possible to create a truly evolving computer virus, and if so, whether it would develop in ways similar to real viruses.

I'm no virologist, but I did happen to do some research in neuro-evolution while I was a student at UT. I'm no expert there either, but it's an interesting question that I have spent some time thinking about. The short answer is that the evolution of real computer programs doesn't really work. In my opinion, the problem is one of representation. Software is very brittle. A program carries out a specific set of instructions, and those instructions are so densely coded that almost any mutation or recombination would yield a completely non-functional program. It's not even a question of reduced fitness - they simply wouldn't work. When nearly all evolutionary paths result in death, evolution just doesn't have much to work with.

Although it's tempting to draw comparisons between DNA and software, I think it's clear that DNA is not software. It is information, but it's not computational. When a ribosome translates RNA, it isn't evaluating a fragile mathematical computation. It is chaining together amino acids to create proteins - tiny biomachines that perform some function. Changes in that representation can potentially produce new proteins which could perform new functions. I'd guess most changes result in only a small change in the fitness of the organism, though some changes could be deadly. Either way, there's enough fuzziness in the system that evolution has some room to play and produce improvements.

That's not to say that computational evolution is impossible. It is done; the trick, however, is to find something other than raw software to evolve. In my university research, I evolved neural networks. A neural network is a system that takes inputs, passes them across a number of nodes that loosely mimic biological neurons, and produces outputs. The representation of a neural network can look a lot more like DNA than normal software does. A small change to a neural network won't necessarily kill it, and it could conceivably improve it. In my research, I showed that you could apply neuro-evolution to complex game-playing tasks and the system would naturally evolve successful playing strategies, apparently limited only by the expressiveness of the underlying neural network architecture.
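To make the idea of a DNA-like representation concrete, here is a toy sketch in Java (purely illustrative - it is not the system from my research) of a tiny feedforward network whose entire behavior is described by a flat array of weights. The genome is just numbers, so nudging a few of them changes the network's behavior gradually instead of breaking the program outright.

    public class TinyNet {
        // Evaluate a single-hidden-layer network. The weights and biases are read
        // sequentially from the genome, so the flat array fully describes the network.
        // The genome must hold hidden*(1+inputs.length) + outputs*(1+hidden) values.
        static double[] evaluate(double[] genome, double[] inputs, int hidden, int outputs) {
            int idx = 0;
            double[] h = new double[hidden];
            for (int j = 0; j < hidden; j++) {
                double sum = genome[idx++];                    // bias
                for (double in : inputs) sum += genome[idx++] * in;
                h[j] = Math.tanh(sum);                         // squashing activation
            }
            double[] out = new double[outputs];
            for (int k = 0; k < outputs; k++) {
                double sum = genome[idx++];                    // bias
                for (double v : h) sum += genome[idx++] * v;
                out[k] = Math.tanh(sum);
            }
            return out;
        }
    }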

In this way, the similarities to biological systems are quite striking. If DNA is the representation of proteins, then evolution of DNA should be capable of producing any imaginable sort of protein machine. If some combination of proteins could, for example, produce the kind of powers you see on Heroes, then I'd think we would almost surely see those capabilities produced. Since we don't observe that, the most likely explanation is that our DNA-based architecture simply isn't capable enough. It's possible evolution just needs more time, but my personal belief is that we've largely hit the limits of our current architecture.

Getting back to software, neural networks are not the only computational devices that can be evolved.  To engage in a bit of circular logic, we can evolve pretty much anything that has a representation that is amenable to evolution.   That is to say, we need some sort of representation of the system that can be acted on by change operators (mutations and recombinations of some sort) and that is resilient in the face of those changes.  
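Continuing the toy example above, the change operators themselves are tiny. Again, this is only a sketch of the general technique, not any particular system: mutation nudges a few weights with a little noise, and recombination splices two parent genomes together.

    import java.util.Random;

    public class Operators {
        private static final Random rng = new Random();

        // Mutation: copy the genome and perturb a few weights with Gaussian noise.
        static double[] mutate(double[] genome, double rate, double strength) {
            double[] child = genome.clone();
            for (int i = 0; i < child.length; i++) {
                if (rng.nextDouble() < rate) {
                    child[i] += rng.nextGaussian() * strength;
                }
            }
            return child;
        }

        // Recombination: one-point crossover of two parent genomes of equal length.
        static double[] crossover(double[] a, double[] b) {
            double[] child = new double[a.length];
            int cut = rng.nextInt(a.length);
            for (int i = 0; i < child.length; i++) {
                child[i] = (i < cut) ? a[i] : b[i];
            }
            return child;
        }
    }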

Real computer software doesn't fit that description, so the idea of setting computer viruses loose on a closed network and applying evolutionary pressure really wouldn't work. We can (and do) create evolving systems in software and study them, but so far I've yet to see anything truly amazing produced. My feeling is that this is due to the limitations of the architectures being evolved, not the limitations of evolutionary techniques.

CDI talk at AustinJUG next Tuesday (2/23)

I'll be kicking off a series of JUG talks on CDI (JSR 299) a bit sooner than I thought. An opening came up for this month's Austin Java Users Group, and I jumped on it. I didn't expect to get to AustinJUG for a few months, but now I get to test the new talk in front of a friendly (I hope) local crowd. Here are the talk details, from the AustinJUG announcement. Hope to see you there!

Main Course

Title: CDI: Contexts and Dependency Injection for the Java EE Platform
Speaker: Norman Richards

Presentation Abstract

Java Contexts and Dependency Injection (CDI) is the new dependency management system introduced in JSR 299. Formerly known as Web Beans, CDI brings the pioneering work done in frameworks such as Seam and Guice into the mainstream of standards-based Java development. CDI is included in the Java EE 6 platform and serves as the unifying component management technology across the entire EE platform. In this talk, I'll introduce the basic concepts of CDI and explain how you can get started using Weld, the CDI reference implementation.

About Norman Richards

Norman Richards is a developer at Socialware, The Social Middleware Company. He is a contributor to the Seam and Weld projects and author of the DZone refcard for CDI. He is also the author of several popular Java books such as JBoss: A Developer's Notebook and XDoclet in Action. Norman can be contacted through his personal website at http://nostacktrace.com/.

Meeting Location & Time

Meeting at the Commons Lil' Tex Auditorium off North Mopac from 7-9 PM

http://www.utexas.edu/commons/maps/
http://www.utexas.edu/commons/rooms/commonslayout.html

A strange thing happened on the way to Netbeans

I've never liked Eclipse. That's what I told myself, at least. At JBoss, I used Eclipse because everyone else did. I always secretly wanted to use Netbeans, but I never had the time to invest in learning to use it properly. Since I only ever had Eclipse just barely working for me, the prospect of switching was too daunting.

At the new job, both Eclipse and Netbeans are in use, so getting going with Netbeans was a very safe choice. Finally I could be rid of that nasty Eclipse! I spent my first couple of weeks using it, and I was quite happy. Netbeans has a pleasant feel to it. It looks nice, and everything is intuitive and well-designed. The Maven integration is flawless, and things just seemed to work.

Unfortunately, the more I used Netbeans, the more I found that it just couldn't keep up with Eclipse in terms of refactoring and quick fixes. No matter how well-designed Netbeans is, the Eclipse code editor is simply better able to keep up with the way I like to code. In Netbeans I was digging through menus to find refactorings that are at my fingertips in Eclipse, and the variety of quick fixes in Eclipse can't be matched. For example, if you add a parameter to a method call in Eclipse, you have the option of either altering the signature of the called method or creating a new method; Netbeans only offers to create a new method. While that may seem like a small thing, those are exactly the kinds of things that make me want to use an IDE. I can build and run from the command line fine. I'm perfectly comfortable accessing my repository without an IDE, and Emacs, with its programmability and macros, has yet to meet its match in any IDE. The selling point for me is being able to operate on the code at a higher level. And, as good as Netbeans is, I was shocked to find that Eclipse actually works better for the things that are most important to me.

That was a really shocking revelation to me. I'd much prefer to prefer Netbeans; it has a sense of style and intuitiveness that really appeals to me. But after spending all this time hating on Eclipse, I have to admit that it does a good job, and somehow it's the tool that actually works best for me right now.

Apache, SSL and name-based virtual hosts

Apparently, the Apache guys aren't too fond of this setup, so the documentation is rather poor. Since it took a while to piece together from information on the web, I thought I'd preserve my effort. The situation: last week I needed to stand up a set of related servers on the net, specifically on an Amazon EC2 instance. Only one IP address was available, so the options were:

  • Run each server on a separate port
  • Proxy each server under one main server URL (ex: http://nostacktrace.com/foo, http://nostacktrace.com/bar, etc..)
  • Proxy each server with a name-based virtual host (ex: http://foo.nostacktrace.com, http://bar.nostacktrace.com)

The name-based solution was by far the easiest to configure the individual back-end servers for; it was not at all clear that every application would cope well with being served at "/foo" instead of "/". Apache handles name-based virtual hosts fine. Here's a basic example of how it could be configured so that foo.nostacktrace.com is proxied to the server on port 8081 and bar.nostacktrace.com is proxied to the server on port 8082.
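Something along these lines should do it, assuming mod_proxy is enabled and the back-end servers are listening locally (the exact layout here is just a sketch):

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName foo.nostacktrace.com
        # Hand everything off to the back-end server on port 8081
        ProxyPass        / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName bar.nostacktrace.com
        # Hand everything off to the back-end server on port 8082
        ProxyPass        / http://localhost:8082/
        ProxyPassReverse / http://localhost:8082/
    </VirtualHost>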

This is great, but I really needed all access to all of the sites to be over SSL. The problem is that we can't just switch directly to port 443. Why? Name-based virtual hosts require the host to be selected by the HTTP Host header, a header that isn't known until after SSL has been negotiated. And the server can't negotiate SSL with a client before it knows which host (and thus which SSL certificate) to use.

The solution is to use a wildcard SSL certificate.  Ignoring the debate as to whether or not wildcards are a good or a bad thing, they are highly useful.  The idea is that instead of getting an SSL certificate for "www.nostacktrace.com", I can get one for "*.nostacktrace.com".  The web server can offer that certificate to any SSL request and then route the request to the correct name-based virtual host after the SSL connection is established.  As long as the virtual hosts all match the wildcard domain, all is well.  Since these were private servers, I created the SSL certificate myself, a task that I should also explain in a later post.

Creating the name-based virtual host configuration was then quite easy. The common SSL configuration can live outside the virtual host blocks, and the per-host configuration goes in the virtual host blocks as before. Note that I've added an SSLProxyEngine on in each host to make sure Apache knows internally that it is SSL; it can make a difference in some cases.
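Sketched out, the HTTPS side looks roughly like this (the certificate paths are placeholders for wherever the wildcard certificate and key actually live):

    # Shared SSL configuration, outside the virtual host blocks
    SSLCertificateFile    /etc/apache2/ssl/wildcard.nostacktrace.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl/wildcard.nostacktrace.com.key

    NameVirtualHost *:443

    <VirtualHost *:443>
        ServerName foo.nostacktrace.com
        SSLEngine on
        SSLProxyEngine on
        ProxyPass        / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/
    </VirtualHost>

    <VirtualHost *:443>
        ServerName bar.nostacktrace.com
        SSLEngine on
        SSLProxyEngine on
        ProxyPass        / http://localhost:8082/
        ProxyPassReverse / http://localhost:8082/
    </VirtualHost>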

The last step was to make sure that all port 80 HTTP traffic gets redirected to HTTPS. The port 80 virtual host configuration then becomes quite simple.
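For example, a single catch-all virtual host on port 80 can bounce every request over to the HTTPS equivalent (again, just a sketch, assuming mod_rewrite is available):

    <VirtualHost *:80>
        ServerName nostacktrace.com
        ServerAlias *.nostacktrace.com
        # Send everything to the same host and path over HTTPS
        RewriteEngine on
        RewriteRule ^/(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
    </VirtualHost>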

And that's all there is to name-based SSL virtual hosts.  

What I'm hoping for from the tablet

Macrumors posted this picture, which is supposedly pretty close to what the actual Apple tablet will look like. Actually seeing something definitely makes me quite interested, but I'm not going to be sold until I see exactly what the device will be like. I thought I'd put together a list of what I personally am looking for. The closer the device comes to this list, the more likely I am to be interested.

  • 3G must be optional. I don't want another contract. I have my phone data plan, and I have my Verizon 3G card. That's already one data bill too many; a third won't cut it. Ideally the tablet will have a USB port that accepts 3G USB devices just like Mac laptops do. (Insert side rant about the lack of an ExpressCard slot on the new MacBook Pros.)
  • The app store must be optional. My locked-down iPhone is really getting on my nerves, and a tablet limited to only approved software would be just plain evil. The acid test is whether or not I can play on Full Tilt. You'll never see an online poker site on the iPhone; if that is true of the tablet too, it isn't open enough.
  • Java and Flash should be available in some form. While I find it constraining, I can live without Java and Flash on the iPhone. There's a lot I can't do online because of those limitations, but the iPhone is too small to really make good use of either technology anyway. Both would be fine on the tablet, though. My tests: can I run cgoban (Java only) to access the KGS go server? Can I play kdice (Flash)? It's easy to be frustrated at the creators of products that are Java- or Flash-only, but in the end not everything will be ported to native Cocoa. I expect a tablet device to be flexible enough to deal with that.
  • The device should offer a good reading experience. I love my Kindle, but the lack of PDF support made it difficult for me to read everything I had; the PDF converter was just not any good. I almost ordered a DX, but in the end early reports of a poor PDF experience and the rumors of the Apple wonder device convinced me to keep my money in my pocket. If I can access the Kindle store and have a great PDF reading experience, I'm 100% on board for a tablet.

That's what I'm looking for. Give me a contract-free open device, and I'm sold. Give me Java and Flash apps and kill my Kindle on top of that, and I'll be on board on day one. Next week will be quite interesting.

The DZone interview

I did a short interview with DZone to go along with the recently released CDI Refcard. These interviews are always a bit on the short side, but I tried my best to give a good high-level overview of what CDI is and what we're trying to do with it. It's not a technical overview; if you are looking for that, check out the Refcard or the Weld project site.

First post!

I've taken the plunge. After I let my CapMac membership expire, the former glory of my Blosxom-powered net presence faded. Rather than continue the insanity of my highly hacked Perl script, I decided that the best route for me is to move to a more managed solution. For now, I've chosen Squarespace. While it lacks some of the customizability that I could get from a programmatic solution, it does most of what I want and makes it all quite easy. That's enough for now, and so I'm setting up camp.