Tuesday, June 29, 2010

Simpler Java #3: A conclusion

This time, I just want to wrap up some very useful "rules" one should obey when writing Java:

Methods should not return or handle null, except if they indicate an optional operation
  • For optional operations (like Map.get(key)), the possible null return must be clearly documented. This should be used seldom, since it cannot be checked by the compiler and the programmer must remember to ALWAYS handle null.
  • Methods which return collections or Strings should return Collections.emptyList() / "" rather than null, even for optional values.
  • Never return null if an error occurs. Throw a checked exception if the error may legitimately happen, and an unchecked exception (usually an IllegalStateException) if the error must not happen and the app cannot recover from it.
  • Set all fields of a bean to initial values in the constructor to prevent them from ever being null.
  • Null input parameters should immediately result in an IllegalArgumentException, to prevent these errors from lurking under the surface.
  • Default values: Sometimes it is possible to define a natural empty or default value for a class. In these cases the empty element can be a static final constant (if it is immutable) attached to the class, and it should be used in all cases where "null" would normally be used. Use this wherever appropriate - it is way better than using null. Default values can also serve as fallbacks when throwing an exception is not possible but some kind of value must be returned at a certain point if everything else fails (see the sketch below).
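A minimal sketch of such a default-value constant; the Money class and its fields are made up for illustration:

public final class Money {

  /** Empty default, usable wherever "no amount" would otherwise be null. */
  public static final Money ZERO = new Money(0, "EUR");

  private final long cents;
  private final String currency;

  public Money(long cents, String currency) {
    this.cents = cents;
    this.currency = currency;
  }

  public long getCents() { return cents; }
  public String getCurrency() { return currency; }
}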

Collections as input and output parameters
Since JDK-Collections are always mutable by their interface, some additional contracts must be defined:
  • Methods should never modify input parameter collections. It may be acceptable in private methods, but not in public ones. Instead of changing the input collection, a new one should be created and returned. The performance impact is negligible, and the method becomes easier to maintain and understand.
  • Public methods returning collections should always return unmodifiable collections, usually by wrapping the result with Collections.unmodifiableList(...). This makes the method more flexible, as it can also return constants like Collections.emptyList(). If a method returned a modifiable list, the result would sometimes be mutable and sometimes not (the empty-list constant) - that's just asking for trouble.
  • Beans which have collection properties MUST always wrap the results of their public getters in Collections.unmodifiableList if the list is mutable. Collection properties may only be changed by setters - which replace the collection with another one - and by explicit public mutators like public void add(...) on the bean itself (see the sketch after this list). Changing collection properties in-place has countless side effects and is very hard to maintain.
  • Whenever a collection of exactly one element should be created, it should be done via Collections.singleton(Object).
  • If mutable collections are used as static fields or in singletons (the most common static field being a (caching) Map), then the field must either be wrapped via Collections.synchronizedMap(...) and friends, or the access must be synchronized by hand. The latter is the better choice, as it allows more fine-grained control over the synchronization blocks. In general, just try to avoid mutable static collections unless you need them for performance reasons.
  • Try to use Google Collections. It has ImmutableList/-Set/-Map, which allow you to create immutable collections without wrappers - yielding nice performance benefits along the way. These are so useful that it's really a pity the JDK does not support them directly.
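A minimal sketch of the collection-property contract described above; the Person bean and its nicknames property are made up:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Person {

  private List<String> nicknames = new ArrayList<String>();

  /** The getter wraps the internal list, so callers cannot modify it in-place. */
  public List<String> getNicknames() {
    return Collections.unmodifiableList(nicknames);
  }

  /** The setter replaces the whole collection (with a defensive copy). */
  public void setNicknames(List<String> nicknames) {
    this.nicknames = new ArrayList<String>(nicknames);
  }

  /** Explicit public mutator on the bean itself. */
  public void addNickname(String nickname) {
    nicknames.add(nickname);
  }
}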
Use Unchecked Exceptions to signal programming errors
If it is possible that your program runs into an unstable or unexpected state, then you must throw a runtime exception indicating that there is a programming error in there.
  • Throw an IllegalArgumentException if your input is bad, especially for null input values. Write static convenience methods for null checks, and use them in all setters and constructors where a null check is needed (a sketch follows at the end of this section).
  • Throw an IllegalStateException if your program is in an invalid state (badly assigned fields, wrong execution order, ...). This usually replaces returning null as an idiom when something fails.
  • Throw an UnsupportedOperationException if you cannot support a method derived from an interface, like write operations on immutable objects.
  • Throw an IndexOutOfBoundsException for crappy index accesses.
  • Throw a ClassCastException (or let the JVM do that for you in an unsafe cast) if you have the wrong object in your hands.
Of course there are more, but these are the most common... In general, all of them except the IllegalArgumentException indicate a (minor) error in your design: a good design tries to enforce compile-time safety rather than handling problems later with runtime exceptions:
  • IllegalStateExceptions can be avoided by using immutable objects as much as possible (see below), as they have no mutable state.
  • UnsupportedOperationExceptions can be prevented by defining "read" interfaces and "write" interfaces separately. In general, the write interfaces then inherit from the read interfaces. Also, try in general to define "thin" interfaces rather than rich interfaces - the JDK collections library is a perfect antipattern for this.
  • IndexOutOfBoundsException: index-based accesses are in general a design problem. You should never rely on positions in your arrays, lists or strings. Instead, create your own classes which handle these problems. Also, you should never iterate over an index within a "for" loop. For lists, use the iterator (since JDK 1.5: Java's foreach loop) instead.
  • ClassCastExceptions, instanceof checks and casts are the bastard stepchildren of the pre-JDK-1.5 era. Starting with JDK 1.5, nearly all of these use cases can be cleanly rewritten using compile-time-checked generics. Also, you should never ever use collections with different kinds of objects in them!
At its best, a program relies solely on IllegalArgumentExceptions for null checking and the like to fortify its code base and keep it maintainable.
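A sketch of such a static convenience method for null checks; the Check class is a made-up name (commons-lang's Validate.notNull does much the same job):

public final class Check {

  private Check() {}

  /** Throws an IllegalArgumentException if the value is null, otherwise returns it. */
  public static <T> T notNull(T value, String name) {
    if (value == null) {
      throw new IllegalArgumentException(name + " must not be null");
    }
    return value;
  }
}

// Usage in a constructor or setter:
//   this.name = Check.notNull(name, "name");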

Do not swallow checked exceptions - re-throw them unchecked instead!
Everybody tells java programmers that they must not swallow exceptions. Good.
But: Nearly no one tells them what to do instead.
It's no wonder that a lot of programmers rather write empty catch blocks (perhaps with a log statement, if they were in a nice mood) for checked exceptions which they think can never happen - unless the program is really screwed or there is an error in the logic.
The solution, however, is very simple. Wrap them in a runtime exception, like this:

try {
  Class.forName("horst");
} catch (ClassNotFoundException e) {
  // would have to die if this ever happened, but it has to be caught anyway
  throw new IllegalStateException(e);
  // If you have apache-commons-lang.jar available, prefer this instead:
  // throw new UnhandledException(e);
}

Nice, huh? That way, you tell the world that this is an unrecoverable condition and for sure a programming error.
However, whenever you do this, keep the following in mind: never, ever do this for a condition which may legitimately fail and does not just indicate a programming error.
If the programmer has made a mistake, a RuntimeException and an app crash are appropriate.
If, however, reading a file failed or something else from the outside world went wrong, you MUST handle that exception somehow.
Of course, you can always start by simply throwing the RuntimeException as above, but that should only be a temporary first solution.


Use Immutables
Always declare your fields final initially, and try to keep this wonderful modifier at all costs, since it helps to make your code understandable.
In general, it's a good idea to separate between immutable "value" objects which hold your abstract data, and "state" objects which have only internal fields which are not accessible, but know how to process these values.
This is a natural, functional way to split (immutable) data from logic and makes your programs very fast (as immutables can be shared), easy to maintain and quick to understand. It also avoids the tricky problems which can occur when defining hashCode on mutable fields...

Simple rule: Mutators (Setters) are evil. 
Another rule: immutables are NOT slow, and they are not a matter of performance. You can always come up with fast immutable algorithms when needed. Do not sacrifice the maintainability of immutables just for the sake of performance (like the JDK did with its collections...).
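A small sketch of such an immutable value object; the Interval class is made up, but it shows the pattern meant here (final fields, no setters, "modifications" return new instances):

public final class Interval {

  private final int start;
  private final int end;

  public Interval(int start, int end) {
    this.start = start;
    this.end = end;
  }

  public int getStart() { return start; }
  public int getEnd() { return end; }

  /** Instead of a setter: returns a shifted copy, the original stays untouched. */
  public Interval shiftBy(int offset) {
    return new Interval(start + offset, end + offset);
  }
}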

Builders for Immutables
Sometimes it is impossible to initialize an immutable class with just the constructor - or it is clumsy and error-prone, since that constructor has a dozen or so parameters, and you do not want to split the object into smaller units.
In these situations, do not define setters as a last resort. Instead, define an inner class called Builder which makes it easier to create objects of that class. By convention, all method names of a builder are those of the properties being set, and a builder always returns a reference to itself from these calls (to allow method chaining).

Other useful rules are to define no getters on them (builders are write-only) and to add a toString method which builds the object and then delegates to its toString (useful for testing).

class Foo {

  public static class Builder {
    private String name;
    private int value;

    public Builder name(String name) {
      this.name = name;
      return this;
    }

    public Builder value(int value) {
      this.value = value;
      return this;
    }

    public Foo build() {
      return new Foo(name, value);
    }

    /** Common toString operation for all builders. */
    public String toString() {
      return build().toString();
    }
  }

  public static Builder builder() {
    return new Builder();
  }

  private final String name;
  private final int value;

  private Foo(String name, int value) {
    this.name = name;
    this.value = value;
  }

  public String getName() {
    return name;
  }

  public int getValue() {
    return value;
  }
}
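Using the builder then looks like this:

Foo foo = Foo.builder().name("horst").value(42).build();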

Obviously, using the builder pattern is very verbose.
It's definitely not suited for simple beans like the one I have described here, but rather for more complex scenarios where you have to initialize immutable beans with collections. However, it's better to be verbose (Java beans are that anyway) than to write setters which make your code harder to understand and test.

Equals / Hashcode

  • Do not define equals / hashCode on mutable fields
Never include mutable fields in your hashing / comparison methods, as changing the values in-place will screw up all HashSets / Maps in which those values reside.
Also, it's very hard to understand code which compares values with mutable state, as these comparisons can yield unexpected results in a multi-threading scenario.
  • Use it only for "natural comparison"
Nearly as important: do not define equals / hashCode for anything other than the "natural" comparison of values in collections. For anything else, write your own named methods for "special" comparisons. There are FEWER applications for hashCode/equals than most programmers think.
  • Do not write your own custom logic
Rely on your IDE to generate it for you after selecting the fields. Everything else is very hard to read. One exception: commons-lang has HashCodeBuilder and EqualsBuilder, which can both be used for this.
  • Implement proxies for equals / hashCode, when needed
If you have a value with mutable fields which should be put into a collection, then create a "proxy" class which wraps the mutable instance. This can also be used if you want to use different "equals" implementations of a class in different collections. The exact semantics depend greatly on your use case. The most common idiom are "key" classes for maps, which combine different values (strings, ints) into one hash key (see the sketch after this list).
  • All statements made before also apply to "Comparable" classes.
Just as a reminder: in general, you should nearly always prefer a Comparator over implementing Comparable, but for simple immutable values the latter might make sense as well.
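As a sketch of such a "key" class (the fields are made up), with equals / hashCode defined only on final, immutable fields - just like an IDE would generate them:

public final class CustomerKey {

  private final String countryCode;   // assumed to be non-null
  private final long customerNumber;

  public CustomerKey(String countryCode, long customerNumber) {
    this.countryCode = countryCode;
    this.customerNumber = customerNumber;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof CustomerKey)) return false;
    CustomerKey other = (CustomerKey) o;
    return customerNumber == other.customerNumber
        && countryCode.equals(other.countryCode);
  }

  @Override
  public int hashCode() {
    return 31 * countryCode.hashCode() + (int) (customerNumber ^ (customerNumber >>> 32));
  }
}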

Override toString and use it only for logging
toString helps the developer (that is, you) during a debug session.
Because of that, you should nearly always include all fields of the object in it. If you do not want to write it by hand, you can use the ToStringBuilder from commons-lang, which provides a toString implementation based on reflection (see the sketch below).
Never use it for functional parts of your application, except for immutable values like Strings, Integers, ...
For mutable values, the reflective "show-all" mechanism of commons-lang should always be used instead.
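For the reflective variant, this is roughly what it looks like (commons-lang 2.x package names; the Customer class and its fields are made up):

import org.apache.commons.lang.builder.ToStringBuilder;

public class Customer {

  private String name;
  private int openOrders;

  /** Reflective "show-all" toString - for debugging and logging only. */
  @Override
  public String toString() {
    return ToStringBuilder.reflectionToString(this);
  }
}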

Simpler Java #2: Know your RuntimeExceptions

The most common part that tends to go wrong in Java is exception handling. (Is there anyone out there who has never used an empty (or log-only) catch block? Anyone?)

To avoid the common problem, here are some very basic guidelines which should help:

Checked vs. RuntimeExceptions
  • Use checked exceptions for problems which MAY occur and which are outside the app's control (I/O errors, DB errors). These must be handled in a suitable way; usually you must also inform the user about these problems.
  • Use unchecked exceptions whenever a problem occurs which cannot be handled in any meaningful way. The app might handle it at the highest level (users don't like GUIs which crash into nirvana), but in general the programmer has made a serious mistake here which must be fixed in the code.
Exception Wrapping
Sometimes you encounter a checked exception for something you know must never go wrong in your app and which you therefore do not want to handle. Examples are I/O errors on stream.close(), or checked exceptions on reflection or cloning (the latter are flaws in the JDK design - ClassNotFoundException should not really be a checked exception, since you can usually never handle it).

In these cases, do this:
try {
  // something that throws ClassNotFoundException
} catch (ClassNotFoundException e) {
  throw new RuntimeException(e);
  // throw new org.apache.commons.lang.UnhandledException(e);
}

This wraps your checked exception in a runtime exception.

Do not return null
I said it in the last post, and I say it again: null is not an appropriate return value if something went wrong. Use either a checked exception if an external condition fails, OR an unchecked one (more common) if the programmer made a mistake. Never return null just because something went wrong.
Null is reserved for lookups with maps and the like in Java (and even that is crappy), and for nothing else.

Returning null has one good point, though: it's easy to gauge the quality of source code simply by counting how often null is used, and why. ;-)


JDK Unchecked Exceptions
There are some JDK exceptions which should be used in some of the most common idioms in Java programming:
IllegalArgumentException: the most common exception ever. Use this for all kinds of invalid inputs, like null values (DON'T use a NoPE here!)
UnsupportedOperationException: used mostly for interfaces which allow mutation (java.util.List) when the concrete implementation does NOT (an unmodifiable list).
IllegalStateException: throw this one if your objects have a certain order in which their operations must be called, and this order has been broken.
IndexOutOfBoundsException: very seldom should you implement your own indices. If you do, however, then this is the appropriate exception.

Why you should try to avoid using unchecked exceptions
The last paragraph told you HOW to use unchecked exceptions. Now I tell you the following: the worse your API design, the more you need to deal with them and the more you need to rely on runtime safety rather than compile-time safety. Unchecked exceptions report problems which should never have happened. The better your application design, the more of these can be turned into compile-time problems.

Some tips:
  1. Don't use interfaces which throw UnsupportedOperationExceptions. Make smaller interfaces instead and use inheritance! (Most common example: ReadInterface, WriteInterface extends ReadInterface - see the sketch after this list.) The JDK collections seriously got this one wrong!
  2. Avoid objects with complex state - and thereby the IllegalStateException. Use immutables (more on this one later).
  3. Use IllegalArgumentExceptions heavily! They make your program more robust, they are easy to avoid, and they should not occur at runtime. Throwing them early can save you a great deal of work compared to searching for the reason why a NoPE or something similar occurred!
  4. Always use the exception if you have to. The previous points told you HOW to avoid the situations. If you do have a lot of state, however, you must throw that IllegalStateException, or everything gets much worse. Still, the sole fact that you have to do this should at least make you think about your API design.
  5. Avoid indexing on collections. No index access -> no IndexOutOfBounds. Seems logical, right? Try to avoid indexing at all costs. Most of the time this can be done better by designing specialised classes (instead of giving each index position in a list or an array its own meaning) and by using the extended for loop. Starting with JDK 1.5, lists should never ever be looped over by their indices - and arrays should be avoided anyway, except for lightweight parameters.
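A sketch of the read/write interface split from tip 1; the playlist interfaces are made up (each would normally live in its own file):

// Read-only view: no mutators, hence no UnsupportedOperationException needed.
interface ReadablePlaylist extends Iterable<String> {
  int size();
}

// The writable view extends the read-only one.
interface WritablePlaylist extends ReadablePlaylist {
  void add(String track);
  void remove(String track);
}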

Sunday, June 27, 2010

Simpler Java #1: No more NoPE's!!

The NullPointerException ("NPE" -> "NoPE" for its fans) is for sure one of the best friends of the Java developer. Why?
  1. Null is, in Java, the contract for "optional" values - values which do not need to be there. Example: java.util.Map.get(key) returns null if the key was not found.
  2. Some people tend to extend this even further and use null as the default return value if the function call fails (rather than throwing an exception).
  3. Variables in beans are not always initialized when the default constructor is used, which results in another NoPE pitfall.
  4. Sometimes, null is also used to say "nothing". The difference between (1) and this one is crucial: optional values are KNOWN to be possibly absent, "nothing" instead is a pit waiting for a fall. It requires the developer to test for null every time explicitly. This is at its worst when used with strings or collections, which have a "natural" empty value: the empty string / collection.
In all those situations, either the caller (1) might forget to check something, or the API is just screwed (2-4) and gives you null to force YOU to null-check and clean up its garbage - more often than not, you won't, and your app says (once again): "NoPE"!

What can / should be done in these cases?
  1. For optional values, null is per se OK (since it's a Java developer contract accepted by most people). However, you MUST declare it clearly in the API! And try to use it seldom.
  2. Never return null just because an error occurs. Use _CHECKED_ exceptions! Or throw unchecked exceptions if the problem should never have occurred at all. Never return null just as a fallback: if you do not go the exception way, the program will crash with a _NoPE_ anyway, so returning null does not really improve your situation here...
  3. Try to use "immutable" beans (more on that later), or at least try to make as many values as possible ready in the constructor. All other fields with "natural" empty values (strings, collections) should be given their "empty" default. Never null. Again.
  4. Speaking of empty values: strings and collections may never, NEVER, NEVER ever be null. There is the default empty value. USE IT! Do not try to make a difference between "nothing" and "empty". The only appropriate answer your program will give you is "NoPE".
  5. MAKE EMPTY CONSTANTS FOR YOUR OWN OBJECTS. And another one: all immutable beans (that is, all "VALUES") should be given an empty default constant which can be used as a return value rather than null. Decide when you need it, but most of the time a constant holding the "EMPTY" instance of your class WILL be an asset. There are always situations which you cannot really handle, but where you do not want to "NoPE" your app into cyberspace... There, the default very often comes to your help (see the sketch below).
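A small sketch of points 1 and 4; the lookup class and its map are made up. The optional null from Map.get is handled right at the call site, and the method itself never returns null for a collection value:

import java.util.Collections;
import java.util.List;
import java.util.Map;

public class NicknameLookup {

  private final Map<String, List<String>> nicknamesByUser;

  public NicknameLookup(Map<String, List<String>> nicknamesByUser) {
    this.nicknamesByUser = nicknamesByUser;
  }

  /** Never returns null: unknown users simply get an empty list. */
  public List<String> nicknamesFor(String user) {
    List<String> nicknames = nicknamesByUser.get(user); // documented optional null
    return nicknames == null ? Collections.<String>emptyList() : nicknames;
  }
}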

So much for some short thoughts and experience on handling null. Burn this into your memory:
If you get null and the API says so, because it's an optional value, and you NoPE: SHAME ON YOU!
If you get null and no one has told you: SHAME ON THEM!
In the second case, ALWAYS the called code must be fixed. A caller should never null-check unless there is a really good, DOCUMENTED reason.
Null-checking in business apps is not "hardening" your code: it just means that you have no clue what's going on in the code you've called... (If it's YOUR code and not some framework, that is.)

Thursday, February 4, 2010

What is a Scalable Language good for

Scala again. This time, I will not try to debate about its internals and whether it is better or worse
than something else. Rather, my intention is to find out what the use cases of this scalability are.

Introduction
Scala is, in my eyes, a language which is pretty excellent at leaving nearly all power in the hands of the programmer, letting him use "his" Scala in "his" preferred style. Unlike most dynamic languages (esp. Perl), this does not mean that Scala is magic. Far from it! Instead, it just tries to look behind most of the syntax magic provided by those languages and to give a statically typed and logical equivalent for it.
The features which most languages have, but do not offer to the user at their full power, are: type inference, operator overloading, switch statements, control structures, closures, type parameters, ...

Offering this kind of power to the user seems strange when compared to the concept which led to the restrictions of Java in all these areas: Java contains nearly NONE of them, because Java tried to do only OOP in a standard way with a standard method syntax. No operators, no closures, no confusion. And I have to say: after the mess into which C++ might have led, this was a wise choice by Gosling, which I still think was quite clever at the time.

Java's choice of being simple is a real pain to experienced devs, who tend to "outgrow" the language at some point. But since Java contains all of the main aspects which are considered important for good coding, i.e. classes ;-), it is a good language to begin with. In other words: there are always new - and cheap - programmers available. Complex languages like C++ usually do NOT have this kind of comfort. They rather tend to scare developers away. ;-)

So what's with Scala? Scala is ultra-complex, far more than C++ will ever be. However, it shines at hiding that complexity. A Scala beginner (coming from Java, I suppose) does not need to know any details about operator overloading, type conversions and the depths of generics, sometimes not even closures!
The interesting part: even though he does not KNOW them, he will USE these advanced topics rather quickly. Why? Because + on Strings is operator overloading (+ on Lists is just as easy to understand), implicits are everywhere behind the scenes, and generics are nothing more than some simple info which must be given to say which elements a list holds.
Type inference? Scary word. var a = "string"? OK. Understood. Looks like a dynamic language.
And so on.

If you have missed my point: the brilliance of Scala is that there are dozens of features which novice programmers can just use, but do not need to understand. They can treat them like language constructs and might be very happy the day they find out how to tailor them to their needs. This GREATLY helps to smooth the long and steady learning curve which would otherwise be needed to learn all of the features.
Also, sometimes the design is just very clever. Java still envies C# for properties. Scala says: fields are functions out of the box. No property syntax needed. At least no keywords, since we _CAN_ overload the operators, right?

So... in my eyes, Scala really removes the trade-off: it keeps the advantages of Java - easy to learn and understand at first glance - while also offering nearly limitless power when needed.

At least in theory. In practice, the combination of implicits and closures can be a real PAIN for any novice programmer. And the interoperability with Java is, unlike some sources state, actually worse than Groovy's for beginners. This has three very important reasons:
  • Groovy is a dynamic language. Since Java is static, Groovy can call Java code in nearly any way, and there are no "conventions" and no different semantics to consider.
  • Scala has its own collections. While this is good - Scala is not Java, and its collections are very different - it really sucks that in 2.7.7 there are no default conversions between even the BASE datatypes. By "BASE" I mean no conversion for Iterator or at least Iterable. I cannot really understand this, since Iterable is basically a language construct for Java and VERY important to support out of the box for Scala for-loops. And there should be some easy ways to at least explicitly convert between Scala and Java collections in Predef. This would greatly help Scala beginners, who will usually stick with Java collections or at least use libraries which use them.
  • Scala's generics suck. No, not really. But they DO suck when using them with Java raw types. And believe it or not, this happens more often than not! In short: there are situations which are completely valid in Java but just hideous in Scala, as Scala's generics are VERY strict and precise in order to prevent type errors. Problems in this corner usually arise when trying to store Scala closures in a raw Java list or something like that. Sounds strange, but it can happen - and it will shred your brain to pieces AND consume a lot of time.
As for me, these last two points are actually the only real problems I faced when learning it. I guess some people will also add closures. That might seem strange, but I am thinking not of people who come to Scala voluntarily, but of those who probably HAVE to (from Java) because their project management wants them to use it. And that leads back to our main question:

When are Scala's capabilities actually good for a project? Where are the advantages?
  • Scala can be tailored to need. This can cause problems (confusing overloads and imports) but also greatly improve the readability of code. In general, this is an advantage for a couple of reasons:
  1. Framework developers (at least open-sourcerers) tend to spend a great deal of time trying to make their API pleasant and easy to use for programmers. Since Scala offers more tools, APIs can be learned more quickly than in Java, I presume. Consider near-DSL-like APIs such as "specs".
  2. The danger of abuse of the language can usually be reduced by a good project manager. Business applications and the like do not really need to hop onto the trend of functional languages and create one-liners which are perfect but write-only. (OK, that one was not fair...)
  3. Of course, if the project manager is an idiot or does not re-read his programmers' code and check for conventions, this will lead right into hell. But a bad PM is always a highway to hell, whatever the language. No real difference.
  • The power of the language will IMHO help developers to make progress in the art of programming much more quickly than simpler languages. In Java, the point where there is nothing new to learn is reached rather quickly. Since learning nothing is boring and there are real essentials still missing in Java, this might have led to the trend of some frameworks to include their own non-Java DSLs. And countless XML dialects, of course. Happy learning. ;-)
  • Scala feels like a dynamic language, but it still offers the performance of a static language. Much more important, however, is that it also offers minimal type annotations. Dynamic-language fans might consider this overhead, but it might actually just reduce documentation overhead: a good Scala source with meaningful names does not need documentation hints telling which params are expected by a function. Dynamic languages usually WILL.
  • Some features of dynamic languages are impossible for Scala, of course. Others can be emulated via implicit conversions. For most users, I guess the only really relevant missing feature might be the legendarily useful art of duck typing. It's sad, but inner classes and interfaces have always been there to fight this problem.

Scala-Revisited

Currently, I am evaluating Scala again and comparing it to other popular JVM languages, like Groovy, or even Java. My question is: is there room for such a complicated, yet extremely powerful language like Scala?

Obviously, I am inclined to scream: "Yes, goddammit! Give it to me! My toy." Hm... maybe I am exaggerating a bit lately, as most people do when comparing competing products. I REALLY like Scala.

However, as an experienced programmer it is easy to be emotional about one's personal "favorite" language, while there are usually very good reasons why my favorite is not everybody else's. There are dozens of good languages out there which no one really uses, and one seriously has to ask: why? One example: Eiffel.

I guess one of the main aspects behind this is the momentum which the larger programming communities create (like Java's), but that one is easy: if you have a good platform which everybody uses, you will soon have libraries which make the language more popular for everybody, and so on and so on... Luckily, the JVM platform, like .NET, defeats this problem: Scala can use Java classes without any problems.

But that is not the problem of Scala. Lately, I have been reflecting more and more on the "complexity" of the language and what it means for the average programmer (like me). Speaking along the lines of Odersky, Java is a cathedral: it hides some features from the user which have been known for abuse and for complexity which "burdens" the programmers. Scala, on the other hand, is a bazaar: it offers nearly everything a programmer could think of as a language construct (even implicits). Effectively, it makes the task of choosing the right tool for the job harder, as there are many more tools to choose from (operator overloading, closures, implicits). Also, one has to be aware that not all programmers in the team might want to use everything the bazaar offers.

The cathedral's advantage is that the complexity is lower; the bazaar's advantage is the flexibility - with the additional weight of deciding which goods from the bazaar are fine and helpful, and which will prove to be a burden better left out.

What about Groovy? Well, Groovy I see as Java's "sidekick" - it resembles its original pretty much, but is tailored for the situations where Java's cathedral vision is not appropriate to solve a problem elegantly (and in time). It is a kind of "special force" or "inquisition" for situations where features are important.

So... my next logical step will be to compare Java/Groovy vs. Scala - not based on features, but with the intention to find out which targets they are trying to achieve, and how.

Monday, August 17, 2009

VL - GWT to Appengine - JPA Issues

-- Appengine and JPA --
One of the things that bugs me about Appengine's JPA support (as used from GWT) is the fact that owned many-to-many relationships are not possible.

The only way to work around this issue is either to use unidirectional relationships (not always a suitable option) or to create a mapping class, much like a mapping table.
Of course, creating such a class and correctly connecting it with appropriate one-to-many relationships is not easy either.

Conclusion: for all simpler use cases I will use unidirectional connections:

Examples:
AgeGroups and Competitions: Competition -> AgeGroups, as the other way round is only needed to associate age groups with a competition, a very rare use case.
Events and Competitions: Event -> Competition. Hierarchically seen, it is quite common to list the competitions for an event, but not to ask to which events a competition belongs.

Competition - Participants: a mapping class must be created anyway because of the special values on the relationship. The following relationships can be found in the Participation class (sketched below):
unidirectional OneToMany: each participation belongs to an event, but the events do not need to know all their participations
bidirectional OneToMany: each participation belongs to a competition, and each competition knows all of its participants
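A rough sketch of what this Participation mapping could look like with plain JPA annotations. The class and field names are assumptions, not taken from the actual code, and Appengine's datastore may require key types other than Long:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
class Event {
  @Id @GeneratedValue
  private Long id;
}

@Entity
class Participation {
  @Id @GeneratedValue
  private Long id;

  // Unidirectional: the participation references its event,
  // but the event holds no collection of participations.
  @ManyToOne
  private Event event;

  // Bidirectional: the participation references its competition ...
  @ManyToOne
  private Competition competition;
}

@Entity
class Competition {
  @Id @GeneratedValue
  private Long id;

  // ... and the competition knows all of its participations.
  @OneToMany(mappedBy = "competition")
  private List<Participation> participations = new ArrayList<Participation>();
}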

Luckily, that seems to be all. I feared that many-to-many would pose many more problems.

Still questionable is the Participation construct: maybe it would be easier to create a join object for competition and event, but on the other hand it should be enough to restrict that via the UI and the DAOs.

-- GWT and JPA --
If it works as specified, combining JPA and GWT should be no problem. I must either declare a new package for the JPA classes (as they are not available on the client if packed into the server package) or find out how to create a "shared" package with classes not tied to either the client or the server.

The latter would be the cleanest solution, but I think it will be - at least in the beginning - also the most unlikely. There should be no problem for the server if everything is packed into the client package - at least I hope so.

The great advantage of this is that I can use exactly the same objects on both the client and the server without any problems. This is extremely elegant and ensures maximum compatibility with both the database server and the JavaScript GUI.

VL - GWT to Appengine

A new experiment I wanted to make was whether it would be possible to create a GWT application based on an Appengine server which is capable of being deployed to "offline servers".

More exactly: I want to create an application which natively runs on the Appengine server with multiple users, but also offers the option to create "deliverables" of the datastore plus an additional jar which could be deployed to a "normal" appserver like a JBoss - for situations where an internet connection is either not possible or where it would be too risky to depend on both the internet and the Appengine being reachable.

The application in question is a re-implementation of a very old DOS program, VL (VolksLauf). The program should be able to register data and participants for various running competitions taking place on the same day. For this task, a humble worker has to enter all participant registrations (currently manually) into the system about one week before the event starts. During the event, the program is used to retrieve the time lists and calculate the rankings, as well as to print the records.

What I want is a GWT-based frontend (just a simple Java GUI) with powerful data input mechanisms, import filters and exports in the form of a record printer. The backend should be an Appengine application with tenant-separated data blocks; in other words, multiple users can use the app at once with completely independent data. However, since the Google accounts do not work offline, there must also be an offline solution usable on a "normal" webserver. This offline solution is just an export of the database with that user's data only (no data from the others) and a much simpler account management (if any). It could be used for events where internet access is either not possible or where the dependence on both a good connection AND the Appengine should be reduced. It would be a catastrophe if there were no connection on the critical day.

After the event, the data could be restored on Appengine by re-importing the deliverable and overwriting all former data of that user.

Well... we will see whether I have the endurance to finish the entire application, or whether I just create some GWT widgets and then give up. Currently, I am mostly taking care of the application skeleton on the client.