java.util.logging vs. SLF4J

A bit of History

At the time, when the now-discontinued log4j was the most commonly used logging framework for Java, Sun decided to implement JSR 47: java.util.logging. There was a lot of discussion, but it does not seem to have been fruitful. java.util.logging was introduced in JDK 1.4 and hasn't changed much since, despite its obvious shortcomings.

The Good, The Bad and the Ugly

One of JUL's best selling points is its integration into J2SE 1.4. Why would you need another logging framework when there is one bundled with the SDK? Unfortunately, it seems the people responsible for JUL created a mess similar to java.util.Date.

What's good in JUL?

  • Integration
    Everything is included in the SDK
  • Easy to get started
    Logger logger = Logger.getLogger("de.glauche.test");
    logger.info("my info message");

What's not so good in JUL?

  • Strange log levels: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST
    The first ones seem OK, but FINE, FINER, FINEST?
  • logger.info(), logger.severe() etc. only accept Strings
    This is a really bad case of an inconsistent API: to log an exception, for example, you need to use logger.log(Level.SEVERE, "Some String", exception); Why are there no shortcuts like logger.severe("Some String", exception);?
  • no parameterized logging, which can be a severe performance penalty
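
The shortcut inconsistency can be seen in a small sketch (plain JDK; the logger name is just an example):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class JulShortcuts {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("de.glauche.test");
        Exception oops = new IllegalStateException("boom");

        logger.severe("plain message");                    // shortcut exists for plain strings ...
        logger.log(Level.SEVERE, "with exception", oops);  // ... but a Throwable needs the generic log()
        // logger.severe("with exception", oops);          // no such overload - does not compile
    }
}
```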

What is Parameterized Logging? (and why should I care about it?)

Imagine you have lots of debug logging statements in your code:

   logger.debug("Number " + i + " has the Value:  " + entry[i]);

Here, every time this code is reached the String is built first, which can be quite time-consuming if there are many variables that need to be converted to strings and concatenated. This can affect the performance of the program quite a bit, especially inside an inner loop.
One common solution is to wrap the debug statement in an if clause:

if (logger.isDebugEnabled()) {
   logger.debug("Number " + i + " has the Value:  " + entry[i]);
}

Needless to say, this leads to quite messy code and is unreadable and inflexible. Fortunately there is a very underused class in Java called MessageFormat, which lets you use placeholders for parameters.
So, when using something like MessageFormat, a logger can accept format(String pattern, Object[] arguments) and decide by itself whether the log entry needs to be constructed at all. In the worst case, the overhead is just a function call.
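
A sketch of that idea with plain java.text.MessageFormat (the debug() helper here is hypothetical, not a real logging API): pattern and arguments are handed over separately, so the expensive formatting can be skipped entirely when the level is disabled.

```java
import java.text.MessageFormat;

public class LazyLogDemo {
    static boolean debugEnabled = false;

    // Hypothetical logger method: formatting only happens when the level is on.
    static void debug(String pattern, Object... args) {
        if (!debugEnabled) {
            return; // worst case: just this method call, no string building
        }
        System.out.println(MessageFormat.format(pattern, args));
    }

    public static void main(String[] args) {
        int i = 3;
        int[] entry = {10, 20, 30, 40};
        // No concatenation happens here while debug is off:
        debug("Number {0} has the Value: {1}", i, entry[i]);
    }
}
```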

SLF4J does exactly this: it uses parameterized strings as log messages by default. The above log entry would look like this in SLF4J:

   logger.debug("Number {} has the Value: {}",i,entry[i]);

Unfortunately SLF4J's logging methods are not varargs: there are only overloads for one or two arguments, so for more you need to create an array of them:

   logger.debug("Value {} was inserted between {} and {}.", new Object[] {newVal, below, above});

and the Ugly?

So, what's really ugly in JUL?

The configuration

The default configuration resides in the JDK's lib directory (!), but can be overridden with a command-line parameter. This is very bad in a J2EE environment, where one JDK instance can host many independent applications.
The configuration file itself is rather nice and straightforward; it is easy to replace parts or set up log levels.
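
For illustration, a minimal logging.properties could look like this (the package name is made up; the handler and level names are the standard JUL ones):

```properties
# send everything to the console
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = ALL

# default level for all loggers
.level = INFO

# finer level for one package only
de.glauche.level = FINE
```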

but wait, there is an API to configure JUL!

There is a limited API to configure the log settings, yes, but it has some drawbacks. First, it is not so easy to replace certain parts, for example when you just want to get rid of the ugly two-line default output and have all the information on one clean line. With the config file this is easy, but with the API you need to implement your own Handler (or Formatter) first.
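
To illustrate that first point, a custom Formatter for one-line output looks roughly like this (the class name and output format here are my own, not part of JUL):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Formatter;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// A minimal one-line formatter, replacing JUL's two-line default output.
public class OneLineFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        return record.getLevel() + " " + record.getLoggerName()
                + " - " + formatMessage(record) + "\n";
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("de.glauche.test");
        logger.setUseParentHandlers(false); // drop the default two-line handler

        Handler handler = new ConsoleHandler();
        handler.setFormatter(new OneLineFormatter());
        logger.addHandler(handler);

        logger.info("my info message"); // -> "INFO de.glauche.test - my info message"
    }
}
```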
Another big drawback is that the configuration of the root logger ("") is still classloader-dependent. If some parts of your (for example J2EE) application use a different classloader mechanism, the default configuration will be used instead.

so how does SLF4J help?

SLF4J is, as the name implies, a simple facade for logging; it does not really log anything itself. For the actual logging you can use the built-in SimpleLogger, JUL (if you just want a nicer API, for example), logback, or even log4j. On the other hand, you can also redirect log entries from other systems into SLF4J, for example from Apache Commons Logging (JCL).
This is especially nice, as many libraries still use JCL or log4j.
SLF4J picks up the logging implementation from the classpath, so switching the real logging mechanism is as easy as replacing a jar file.
SLF4J is fast, one does not need to write if () wrappers around debug statements, and it has nice parameterized logging.
On the downside, you usually need at least two additional jars as dependencies in your project: the facade and the actual logger implementation.

There's also a short introduction on how to automatically populate logger fields in classes with the help of Guice.

Posted in java

Drag & Drop with Selenium

Using Selenium for web GUI tests is really nice. You can record your workflow with the excellent Selenium plugin for Firefox and use it to write your own tests.
The only problem seems to be drag & drop. The Selenium runner has some drag-and-drop methods, but they don't seem to work at all. However, Selenium offers mouse control commands that let you simulate mouse clicks, not only at an absolute position but also relative to a DOM element. So, if your draggable <div> element has ID1 and the drop zone has ID2, you could write the following in your test runner:

   selenium.mouseDownAt("//div[@id='ID1']","10,10");
   selenium.mouseMoveAt("//div[@id='ID2']","10,10");
   selenium.mouseOver("//div[@id='ID2']");
   selenium.mouseUpAt("//div[@id='ID2']","10,10");

The important command here is probably mouseOver; without it, the RichFaces drag-and-drop component for JSF would not work.

Posted in java, Testing

Logging with SLF4J and Guice

After getting angry at java.util.logging once again, I was thinking about how to replace it with the SLF4J logger. Although Guice provides a very nice internal binding for java.util.logging, SLF4J offers a much nicer syntax.
The devil is in the details, as always ... if you want your logger to be initialized with the current class, you can't simply inject it. But there is a nice tutorial on the Guice wiki about injecting a log4j logger; SLF4J works the same way.

First you need a new annotation, like InjectLogger:

import static java.lang.annotation.ElementType.FIELD;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target({FIELD}) 
@Retention(RetentionPolicy.RUNTIME) 
public @interface InjectLogger {
}

Next is a TypeListener that listens for org.slf4j.Logger fields annotated with InjectLogger:

import java.lang.reflect.Field;

import org.slf4j.Logger;

import com.google.inject.TypeLiteral;
import com.google.inject.spi.TypeEncounter;
import com.google.inject.spi.TypeListener;

public class Slf4jTypeListener implements TypeListener {

    public <I> void hear(TypeLiteral<I> aTypeLiteral, TypeEncounter<I> aTypeEncounter) {

        for (Field field : aTypeLiteral.getRawType().getDeclaredFields()) {
            if (field.getType() == Logger.class
                    && field.isAnnotationPresent(InjectLogger.class)) {
                aTypeEncounter.register(new Slf4jMembersInjector<I>(field));
            }
        }
    }
}

Finally, you need the Slf4jMembersInjector, which does the actual injection:

import java.lang.reflect.Field;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.inject.MembersInjector;

public class Slf4jMembersInjector<T> implements MembersInjector<T> {
    private final Field field;
    private final Logger logger;

    Slf4jMembersInjector(Field aField) {
        field = aField;
        logger = LoggerFactory.getLogger(field.getDeclaringClass());
        field.setAccessible(true);
    }

    public void injectMembers(T anArg0) {
        try {
            field.set(anArg0, logger);
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }
}

Now you just need to bind your TypeListener inside your module class:

   bindListener(Matchers.any(), new Slf4jTypeListener());

The actual usage is simple; just use @InjectLogger instead of @Inject:

    @InjectLogger Logger logger;
Posted in guice, java

got the iMON VFD display (15c2:0036) finally working

The IR part of the SoundGraph 15c2:0036 VFD/IR device has been working for quite some time now, after a couple of big updates hit LIRC (currently it works with the 0.8.5+ CVS version).

One small quirk was still present: the VFD display would not work all the time. There were always entries like

lirc_imon: vfd_write: send packet failed for packet #1

in the dmesg log, and the screen would become garbled from time to time.
It turns out to be a timing issue; with a small patch to lirc_imon everything works perfectly!

diff -u -r1.105 lirc_imon.c
--- drivers/lirc_imon/lirc_imon.c       5 Aug 2009 01:17:26 -0000       1.105
+++ drivers/lirc_imon/lirc_imon.c       23 Aug 2009 08:49:15 -0000
@@ -625,8 +625,10 @@
        }

        kfree(control_req);
+       set_current_state(TASK_INTERRUPTIBLE);
+       schedule_timeout(10);

-       return retval;
+       return retval;
 }

 /**

The original patch had the timeout at 50 jiffies, but that led to quite slow updating of the VFD display; commands took up to two seconds to show up. Maybe it even works with lower values, but it seems fast enough for me right now.

The next thing to do is to find a way to display the current MPD information on the VFD; the lcdproc way seems a bit oversized. For MPD there is a nice Ruby package at http://librmpd.rubyforge.org/ , so getting the information is not a big problem.

Posted in linux

Easymock … or taking the pain out of (j)unit tests

What are Unit tests?

Unit tests are the lowest level of testing. They are whitebox tests, which means you know what the code does. Unit tests exercise the methods of a class independently, without relying on other classes or services.

This sounds good in theory, but can lead to many problems in real life, especially in J2EE environments, as the class you are testing depends on lots of other things, from a database connection to HTTP request parameters.
This is one of the reasons why unit test frameworks like JUnit are often (mis)used for integration tests, for example by providing a database connection in your test. But with a database connection, what are you testing? Your code? The state of the database? The OR mapper? That would be a blackbox test with too many unknown and out-of-scope variables; you would never know in which part the error occurred.
That's why it is still very useful to have unit tests, even if they seem hard and cumbersome to write.

Enter Easymock

With EasyMock it is possible to mock every object that has an interface. Mocking means that the actual implementation of the interface is replaced with a dummy placeholder class that you can fill with behaviour. For example, you could create a mock of HttpServletRequest and use it to pass request parameters to your class. Another example would be to replace the database access classes (DAOs) with mocks, so your test class doesn't need a database anymore.

Let's look at an easy example:

public class User {
    public String name;
    public String password;
    public Integer active;
}
public interface UserDao {
     public User find(String name);
}

The actual implementation would connect to the database and fetch the User object with the matching name.
Now, suppose we want to test a UserService which uses the DAO to locate a user, checks whether the user is active, and returns
the user object, or null if the user is inactive. The implementation (with a dependency injection framework like Guice) would look like this:

public class UserService {
   private UserDao dao;

   @Inject
   public UserService(UserDao tempDao)  {
      dao = tempDao;
   }

   public User logon(String name) {
       User tempUser = dao.find(name);
       if (tempUser.active > 0) return tempUser;
       return null;
   }
}

Now, this implementation has a simple bug in it: what happens if the DAO didn't find the user? How would a test case find that bug? A first draft could look like this:

import static org.easymock.EasyMock.*;
...
@Test
public void testUserLogon() {
   UserDao mockDao = createMock(UserDao.class);
   User fred = new User();
   fred.name = "Fred";
   fred.active = 1;
   User joe = new User();
   joe.name = "Joe";
   joe.active = -1;
   expect(mockDao.find("Fred")).andReturn(fred);
   expect(mockDao.find("Joe")).andReturn(joe);
   expect(mockDao.find("Unknown")).andReturn(null);
   replay(mockDao);
   // now we initialize the service with the mocked DAO object!
   UserService serviceToTest = new UserService(mockDao);
   User result = serviceToTest.logon("Fred");
   assertNotNull(result);
   assertSame(fred, result);
   result = serviceToTest.logon("Joe");
   assertNull(result); // Joe is inactive

   result = serviceToTest.logon("Unknown"); // NullPointerException here - the test reveals the bug
   assertNull(result);
}

Now, with the help of AtUnit we can simplify the code a lot:

@RunWith(AtUnit.class)
@MockFramework(MockFramework.Option.EASYMOCK) // tells AtUnit to use EasyMock
@Container(Container.Option.GUICE)
public class UserServiceTest {
   @Inject @Unit UserService serviceToTest;
   @Mock UserDao dao;

   @Test
   public void testUserLogon() {
      User fred = new User();
      fred.name = "Fred";
      fred.active = 1;
      expect(dao.find("Fred")).andReturn(fred);
      replay(dao);
      User result = serviceToTest.logon("Fred");
      assertNotNull(result);
   }
}
Posted in java, Testing

Aop with Guice

AOP has many uses, from transaction handling to authorisation. With the powerful Guice framework it is surprisingly easy.

In this example we mark methods of a class with an annotation and use Guice to intercept them.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
public @interface InjectedTransaction {

}

Now, we mark a method of some class with it:

public class BasicDummyService {

    @InjectedTransaction
    public void testService() {
        System.out.println("doing testService");
    }
}

Now we need an interceptor, which will be called as a proxy instead of the real service:

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class AopInterceptor implements MethodInterceptor {
    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("Interceptor: before");
        Object o = invocation.proceed();
        System.out.println("Interceptor: Done !");
        return o;
    }
}

Finally, we need to use Guice’s Module to configure the binding:

import com.google.inject.AbstractModule;
import com.google.inject.matcher.Matchers;

public class AopModule extends AbstractModule {

    @Override
    protected void configure() {
        bind(BasicDummyService.class);
        bindInterceptor(Matchers.any(),
                Matchers.annotatedWith(InjectedTransaction.class),
                new AopInterceptor());
    }
}

That's it! Now, when you call the testService() method on a Guice-injected instance, you'll get the following:

Injector injector = Guice.createInjector(new AopModule());
BasicDummyService test = injector.getInstance(BasicDummyService.class);
test.testService();

Result: 

Interceptor: before
doing testService
Interceptor: Done !

The Guice Wiki also has a nice article about AOP with Guice.

Posted in aop, guice, java

a thin god

Nothing spiritual here, just the cool monitoring framework god together with thin. You've probably read a lot of benchmarks about thin, but what good is the fastest web server if you can't ensure it is up and running? This is where god comes into play. God is (besides the silly name) a nice process monitoring framework with a Ruby configuration script. You can easily start/stop/monitor daemon processes and put advanced conditions on them, like CPU or memory usage.

The default god startup file looks quite neat if you only run one site, and is easily converted to work with thin, as thin and mongrel share the same parameters. Now, I don't have the luxury of only supporting one site, but rather quite a bunch of (mostly small and inactive) sites which share a lot of criteria. Creating this kind of config file for each of them seems quite excessive and against the Rails DRY principle.

Fortunately, with a recent thin version (I tried 0.6.3) you can keep the configuration for each site in a separate YAML file; there's even a small god startup script included in the examples directory.

First, let's take a look at the thin YAML config of a test Rails installation, in this case test.yml in /etc/thin:

servers: 3
user: www-data
group: www-data
chdir: /var/www/rails
pid: tmp/pids/test
port: 8000
address: 0.0.0.0
log: log/thin.log

Most of it is pretty self-explanatory: the pid and log file paths are relative to the start directory, and the uid/gid is set to www-data (the default on Ubuntu machines).
Unfortunately, the thin.god startup file in thin 0.6.3 seems to have a small bug when it comes to allocating ports (or I'm doing something wrong, but the config file works flawlessly with plain thin), so here's a fixed version of the thin.god file:

# == God config file
# http://god.rubyforge.org/
# Author: Gump
#
# fixed thin ports <michael@glauche.de>
#
# Config file for god that configures watches for each instance of a thin server for
# each thin configuration file found in /etc/thin.

require 'yaml'

config_path = "/etc/thin"

Dir[config_path + "/*.yml"].each do |file|
  config = YAML.load_file(file)
  num_servers = config["servers"] ||= 1
  for i in 0...num_servers
    God.watch do |w|
      w.group = "thin-" + File.basename(file, ".yml")
      port = config["port"] + i

      w.name = w.group + "-#{port}"

      w.interval = 30.seconds

      w.uid = config["user"]
      w.gid = config["group"]

      w.start = "thin start -C #{file} -o #{port}"
      w.start_grace = 10.seconds

      w.stop = "thin stop -C #{file} -o #{port}"
      w.stop_grace = 10.seconds

      w.restart = "thin restart -C #{file} -o #{port}"

      pid_path = config["chdir"] + "/" + config["pid"]
      ext = File.extname(pid_path)

      w.pid_file = pid_path.gsub(/#{ext}$/, ".#{port}#{ext}")

      w.behavior(:clean_pid_file)

      w.start_if do |start|
        start.condition(:process_running) do |c|
          c.interval = 5.seconds
          c.running = false
        end
      end
      w.restart_if do |restart|
        restart.condition(:memory_usage) do |c|
          c.above = 150.megabytes
          c.times = [3,5] # 3 out of 5 intervals
        end

        restart.condition(:cpu_usage) do |c|
          c.above = 50.percent
          c.times = 5
        end
      end

      w.lifecycle do |on|
        on.condition(:flapping) do |c|
          c.to_state = [:start, :restart]
          c.times = 5
          c.within = 5.minutes
          c.transition = :unmonitored
          c.retry_in = 10.minutes
          c.retry_times = 5
          c.retry_within = 2.hours
        end
      end
    end
  end
end

Note: there was a bug with older god releases on Ubuntu, but everything worked flawlessly for me with the 0.7.0 god release (there were some problems with the 0.5.0 release and Ubuntu, so check that you have an up-to-date god version).

It will look in /etc/thin for all .yml files and will start and supervise them accordingly. To start the monitoring, use

 # god -c thin.god

After it's up and running, you can check the status of your servers with:

 # god status

That command should give the following output:

thin-test-8000: up
thin-test-8001: up
thin-test-8002: up
Posted in ruby on rails, thin

How many Thin server instances are best ?

I wondered how many parallel thin instances you need to get optimal throughput. I often saw values from 3 to 10 servers in different configuration examples around the net. Time to get some real numbers.

To get a meaningful benchmark I took an old dual 1.3 GHz P3-S system with the latest Ubuntu, installed Rails 2.0.2 and a test project, then created a very simple controller that just passes a "hello world" string to the view. No database connections are made.

The server is started with

rake thin:cluster:start RAILS_ENV=production SIZE=5 PORT=8000

You'll need the latest thin 0.5.2 for this, as 0.5.1 has a serious bug that prevents it from running daemonized. The command above starts 5 thin instances, listening on ports 8000-8004. To benchmark different numbers of thin instances, I modified the upstream entries in nginx:

# only one server used in this example
upstream thin {
   server 127.0.0.1:8000;
   #server 127.0.0.1:8001;
   #server 127.0.0.1:8002;
   #server 127.0.0.1:8003;
   #server 127.0.0.1:8004;
}

The results are not really surprising and show that the nginx + thin combo scales well even up to 100 concurrent users. All tests were done with

ab -n 10000 -c 20 http://localhost/foo/

where the “foo” controller was the minimal one returning the “hello world” page.

The results can be seen in this graph:

Nginx & thin Benchmark

As you can see, the best result for this dual-CPU machine was indeed with two instances of thin. It is also interesting to see that thin + nginx scales quite well over many concurrent requests.

Of course, this is an idealized test with minimal load time for each page call. In the next version I'll put in a random delay before the template is rendered, to simulate real-world loading times and slow operations (users up-/downloading large files, etc.).

Posted in nginx, ruby on rails

Thin + Nginx with Rails

Recently I've been playing around with Xen and different hosting solutions, and I was wondering about lightweight yet performant replacements for the usual Apache + mod_fcgi + dispatcher stack. I have toyed around with nginx before, together with mongrel, with quite good success.

But it seems there is some serious competition for mongrel coming along: the Thin web server. It combines the good part of mongrel, the HTTP parser, with an event-driven IO framework called EventMachine. Its benchmarks look promising:

Rails Webservice Benchmark

It is easy to see that the event-driven IO approach is clearly superior to the others when handling many concurrent requests.

Now, why nginx when thin already performs so well? Well, for one thing, thin is still at a quite early stage of development. Also, when it comes to serving static files, the Ruby version does not even come close. Another factor is that nowadays most CPUs are multi-core, while Ruby is single-threaded; that means all concurrent requests would be served by one CPU.

This is where nginx comes into play: it is an excellent HTTP server, proxy, and load balancer. One could start several Rails servers with mongrel (mongrel_cluster is an excellent tool for that), or many thin servers. Right now I'm using the following rake task (from Stephen Celis) for starting thin:

namespace :thin do
  namespace :cluster do
    desc 'Start thin cluster'
    task :start => :environment do
      `cd #{RAILS_ROOT}`
      port_range = RAILS_ENV == 'development' ? 3 : 8
      (ENV['SIZE'] ? ENV['SIZE'].to_i : 4).times do |i|
        Thread.new do
          port = ENV['PORT'] ? ENV['PORT'].to_i + i : ("#{port_range}%03d" % i)
          str  = "thin start -d -p#{port} -Ptmp/pids/thin-#{port}.pid"
          str += " -e#{RAILS_ENV}"
          puts str
          puts "Starting server on port #{port}..."
          `#{str}`
        end
      end
    end
    desc 'Stop all thin clusters'
    task :stop => :environment do
      `cd #{RAILS_ROOT}`
      Dir.new("#{RAILS_ROOT}/tmp/pids").each do |file|
        Thread.new do
          if file.starts_with?("thin-")
            str  = "thin stop -Ptmp/pids/#{file}"
            puts "Stopping server on port #{file[/\d+/]}..."
            `#{str}`
          end
        end
      end
    end
  end
end

With that you can start/stop many thin instances easily:

# rake thin:cluster:start RAILS_ENV=production SIZE=3 PORT=8000
# rake thin:cluster:stop

Getting those instances into nginx is also easy; the following example layout works with the Ubuntu nginx package:

upstream thin {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
        listen   80;
        server_name  localhost;
        access_log  /var/log/nginx/localhost.access.log;
        root /var/www/test/public;

        location / {
                proxy_set_header  X-Real-IP  $remote_addr;
                proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect false;
                if (-f $request_filename/index.html) {
                        rewrite (.*) $1/index.html break;
                }
                if (-f $request_filename.html) {
                        rewrite (.*) $1.html break;
                }
                 if (!-f $request_filename) {
                        proxy_pass http://thin;
                        break;
                }
        }
}

To see if it is really working, I used ApacheBench on the same setup with a simple dynamic page on a dual-CPU P3-S 1.3 GHz machine:

# ab -n 1000 -c 10 http://10.1.4.99/foo/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.1.4.99 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests

Server Software:        nginx/0.5.26
Server Hostname:        10.1.4.99
Server Port:            80

Document Path:          /foo/
Document Length:        59 bytes

Concurrency Level:      10
Time taken for tests:   6.247127 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      505000 bytes
HTML transferred:       59000 bytes
Requests per second:    160.07 [#/sec] (mean)
Time per request:       62.471 [ms] (mean)
Time per request:       6.247 [ms] (mean, across all concurrent requests)
Transfer rate:          78.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.7      0       9
Processing:     7   61  72.9     35     453
Waiting:        0   61  72.9     34     453
Total:          7   61  72.9     35     453

Percentage of the requests served within a certain time (ms)
  50%     35
  66%     59
  75%     82
  80%     97
  90%    134
  95%    203
  98%    333
  99%    380
 100%    453 (longest request)

The same benchmark against one of the thin servers directly gives the following result:

# ab -n 1000 -c 10 http://10.1.4.99:8000/foo/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.1.4.99 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests

Server Software:
Server Hostname:        10.1.4.99
Server Port:            8000

Document Path:          /foo/
Document Length:        59 bytes

Concurrency Level:      10
Time taken for tests:   7.880520 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      446000 bytes
HTML transferred:       59000 bytes
Requests per second:    126.90 [#/sec] (mean)
Time per request:       78.805 [ms] (mean)
Time per request:       7.881 [ms] (mean, across all concurrent requests)
Transfer rate:          55.20 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:    63   78  50.9     63     252
Waiting:       61   77  50.8     62     251
Total:         63   78  50.9     63     252

Percentage of the requests served within a certain time (ms)
  50%     63
  66%     63
  75%     64
  80%     64
  90%     65
  95%    251
  98%    251
  99%    252
 100%    252 (longest request)

As you can see, the nginx version scales quite a bit better with 10 concurrent users.

Now what is really missing is some nice integration into startup scripts, so you can automatically start/stop many nginx/thin installations at boot (and not execute manual rake tasks :))

Posted in nginx, ruby on rails

using extjs in rails part 2

creating a simple logon window

First, we need a simple view for our index action defined in part 1:

index.rhtml in views/login

<script type="text/javascript"
   src="/javascripts/login.js"></script>    

<p>Here comes the Content which
will be used after the user logged on.</p>

This is just some dummy content that will be "blocked" by a modal ExtJS window. There can be any HTML content inside it; it will be disabled, but it remains visible in the HTML source code, so additional server-side checks are necessary if this is to work as a secure form.

Now, the interesting stuff is of course login.js itself:

var loginForm = new Ext.form.FormPanel({
    baseCls: 'x-plain',
    labelWidth: 75,
    url: '/login/doLoginTest',
    defaultType: 'textfield',
    items: [{
        fieldLabel: 'Username',
        name: 'name',
        anchor: '90%'  // anchor width by percentage
    }, {
        fieldLabel: 'Password',
        name: 'subject',
        anchor: '90%'  // anchor width by percentage
    }],
    buttons: [{
        text: 'Login',
        handler: function() {
            loginForm.getForm().submit({
                method: 'GET',
                waitMsg: 'Submitting...',
                reset: false,
                success: function() {
                    loginWindow.close();
                },
                failure: function(form, action) {
                    Ext.Msg.alert('Error', action.result.text);
                }
            });
        }
    }]
});

var loginWindow = new Ext.Window({
    title: 'Login',
    width: 300,
    height:140,
    closable:false,
    minWidth: 300,
    minHeight: 140,
    layout: 'fit',
    plain:true,
    modal:true,
    bodyStyle:'padding:5px;',
    items: loginForm
});
Ext.onReady(function(){
    loginWindow.show(this);
});

Now lets get through this step by step:
The first function that is called is Ext.onReady, which is the startup function that is called by the extjs toolkit after the page has finnished loading and the toolkit did initialize. The function shows up the loginWindow, which was declared earlier by the line “var loginWindow = new Ext.Window({“. Parameters are always passed as a Javascript Object {param1,param2,param3}, which might look a bit confusing at first, but is very practical to set different parameters. The real interesting parmeter in loginWindow is the “items: loginForm” line, it does define what will end up inside the window, here its a FormPanel. The rest are basicly only stlye information to make it look like a window.

Now, the loginForm object contains the really interesting stuff. You'll probably recognize the style options from the loginWindow object, along with the "items:" line. But here we are not referencing another object; instead we inline the things we need, again as JavaScript objects, this time as an array of objects, written as "items: [{object1}, {object2}]". The "buttons:" line behaves the same, but uses button objects with click handlers instead.

handler: function() {
    loginForm.getForm().submit({
        method: 'GET',
        waitMsg: 'Submitting...',
        reset: false,
        success: function() {
            loginWindow.close();
        },
        failure: function(form, action) {
            Ext.Msg.alert('Error', action.result.text);
        }
    });
}

This JavaScript part creates an Ajax submit form for the "Login" button. ExtJS takes care of all the housekeeping, wait messages, etc.; you just tell it what to display. The "success" and "failure" functions are user-defined and triggered by what the controller sends back in the form of a JSON hash. The controller looks like this:

   def doLoginTest
      headers["Content-Type"] = "text/plain; charset=utf-8" 

      puts params[:name]
      if (params[:name] == "mg")
         data = { :success => 'true'}
      else
         data = { :failure => 'true', :text => "Username or Password wrong !"}
      end
      render :text => data.to_json, :layout => false
   end

As you can see, it's a very secure mechanism that's basically foolproof and impossible to guess 🙂 But all joking aside, you can see the result is gathered into a Ruby hash or array, which is then converted to JSON. For this to work you need either Rails 2.0 (which is recommended because it makes life a lot easier with ActiveRecord and JSON), or you need the json gem and put a

require 'json/objects'

at the top of the controller. Using this mechanism you can transfer all kinds of data from your Rails controller to the ExtJS frontend. In fact, all communication between ExtJS and Rails is basically JSON, but this will be covered in part 3, where we'll take a more in-depth look at getting ActiveRecord data into ExtJS.

Posted in extjs, ruby on rails