Written on Monday, January 21, 2008 by Edwin Sanchez
Don’t misinterpret the title. Db4o is a very good product. I admit I was skeptical about using it at first, but once you try it, you won’t want to stop. You may have heard other reasons from people you know for why they still aren’t using db4o. Are you one of them? Read on. You will not use db4o if:
- You want to take it slow. If you want to spend your time mapping objects to their relational counterparts, then that’s your choice. Even with ORM tools, you still need time that a native ODBMS simply does not require.
- Your faith is in the RDBMS. Some developers I know have been “indoctrinated” to believe that an RDBMS is better and an ODBMS is not good. We cannot blame people for thinking that, given the history of object databases. Added to this is the tendency to stick with whatever is “in” and to follow the majority, on the assumption that the vast majority must be right. And that vast majority is the RDBMS crowd. But it’s high time to look at the products available today, especially db4o. The product is promising, and there are thousands upon thousands of members in the community who trust the product and the company behind it.
- You want more work and fewer implemented features. Coding queries, inserts, updates and deletes is very simple in db4o (see the sketch after this list), which leaves more time for implementing features instead of mapping objects to relational counterparts. More implemented features is good for us developers, and it translates into good employee performance and satisfied customers.
- You want more time at work and less with your loved ones. I remember the days when my colleagues and friends used to say “No time for love.” I was single back then, and I spent less time with my girlfriend (now my wife) because there was so much to code. Now I have a 3-year-old daughter, and I need even more time for my loved ones. If your development tools can cut development time and your database does not require extra coding, you’ll have more quality time for your family. Db4o made my database tasks simpler.
- “But you can do away with ORM and map your table fields straight to your user interface, right?” Before I answer that, let me note that there is a group that prefers an RDBMS with ORM tools, and on the other side a crowd that prefers an ODBMS like db4o. The main goal of both groups is to adhere to object-oriented design and principles. This is not bad; object-oriented principles have proven their worth many times. However, there is a third crowd that will use neither ORM nor objects. They say the overhead of ORM can be avoided by deviating from object-oriented rules altogether: do your SQL homework, call it from C# or any other language, and map the results to your visual controls. You can in fact do this in Visual Studio without much coding effort. Just drag the data controls and visual controls onto your form or web page, set the properties, and that’s it: you have a running application. So, to answer the question: yes, you can escape from ORM and its overhead and map data straight to visual controls. But why not solve the ORM issues by using a native object-oriented database like db4o instead? You adhere to object-oriented principles without hurting performance, and you get rid of the impedance mismatch at the same time.
- You just don’t know db4o. When I started working with db4o, I told my team about it. Then I told my boss. I talked about it with other developers I know, and I even built a demo from one of my projects. They thought it was cool, and my boss allowed me to look into it further. But you know what? When I first talked to people about it, they had never heard of it. Knowing these reasons now, I think there is a lack of promotion in certain areas like mine; I don’t know about others. Starters can always download it, try it out, and read the documentation thoroughly. You won’t regret it. Then we can all help by telling friends about it: write something and tell the world, contribute code, and help newcomers get up to speed.
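To give you a feel for how simple it is, here is a minimal sketch of storing, querying, updating and deleting an object with the db4o .NET API. The Pilot class and file name are just illustrations, and exact method names can vary between db4o versions (older releases use Set() where newer ones use Store(), for example):

using System.Collections.Generic;
using Db4objects.Db4o;

public class Pilot
{
    public string Name;
    public int Points;
    public Pilot(string name, int points) { Name = name; Points = points; }
}

public class PilotCrud
{
    public static void Main()
    {
        using (IObjectContainer db = Db4oFactory.OpenFile("pilots.db4o"))
        {
            // Insert: store the object as-is, no mapping layer required.
            db.Store(new Pilot("Rubens Barrichello", 99));

            // Query: a native query is plain C# instead of SQL.
            IList<Pilot> result = db.Query<Pilot>(
                delegate(Pilot p) { return p.Points > 90; });

            // Update: modify the retrieved object and store it again.
            Pilot found = result[0];
            found.Points = 100;
            db.Store(found);

            // Delete: remove the object directly.
            db.Delete(found);
        }
    }
}

No CREATE TABLE, no INSERT statement, no mapping file: the class itself is the schema.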
As usual, this is just to share my thoughts on my favorite database. I’ve been using it in a client-server environment, so your experience may differ from mine, but generally speaking the points presented here should still hold.
Posted in
Databases,
db4o
Written on Monday, January 14, 2008 by Edwin Sanchez
Reports drive businesses to be better. Behind those humongous piles of data are processes that need to generate summaries, listings, graphs and more. In addition to what we already know about supercharging our db4o database to top speed, below are the two main questions I ask myself when designing an application that uses a db4o database and must generate reports fast:
Will the report be produced in real time or not?
This is a question I ask my users before I finalize the data source of a report. If it is a periodic report (weekly, monthly, etc.), the best design strategy for speed that I have found is to persist only the information needed for the particular report in one object, and have an automated process run at night (or on weekends, depending on the requirement) to update that data. When reporting time comes, my application queries the summarized data based on user criteria. This way, the user perceives faster generation of a report that is based on lots of data. However, if the report must reflect the latest data in real time, on demand, then the only option is to query the raw objects directly.
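To illustrate the periodic approach, here is a minimal sketch assuming the db4o .NET native query API; the DailySalesSummary class, its fields and the criteria are hypothetical stand-ins for whatever your nightly process aggregates:

using System;
using System.Collections.Generic;
using Db4objects.Db4o;

// Hypothetical pre-aggregated object: the nightly job collapses the raw
// transactions for one branch and one day into a single instance.
public class DailySalesSummary
{
    public string Branch;
    public DateTime Day;
    public decimal TotalAmount;
    public int TransactionCount;
}

public class SummaryReport
{
    // The report query touches only the small summary objects,
    // never the humongous raw data.
    public static IList<DailySalesSummary> Fetch(
        IObjectContainer db, string branch, DateTime from, DateTime to)
    {
        return db.Query<DailySalesSummary>(delegate(DailySalesSummary s)
        {
            return s.Branch == branch && s.Day >= from && s.Day <= to;
        });
    }
}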
Will the volume of raw data be very large?
During the initial design stage of a system, I try to find out whether the volume of data can get so large that querying it, whether with native queries or SODA, will still be slow and unacceptable to the user. When I perceive this to be the case, I design my db4o database as separate files. This reduces processing and loading time, since each data store is smaller. (Deciding which data goes into which file is something to take seriously, though; it can make things better or worse depending on the design.) One example is when you need to generate a monthly report out of humongous raw objects: you can create separate files, each covering one month of one year. If separate monthly files are your way to go, I have found the following settings effective in my case (they are pulled together in the configuration sketch after this list):
- FlushFileBuffers(false). “Wait a minute! Isn’t this considered a dangerous practice?” Yes, it is; it is listed under Dangerous Practices in the Reference Documentation. You may not agree with me, but here is the idea. The monthly report files are generated separately each month, so a damaged one will not affect the others. Added to that, if a file gets corrupted for some reason, you can still reproduce it by re-running the data processing for the month and year concerned.
- Enable field indexes on key fields. The reason should be clear: indexes speed up retrieval of your summary data later. But index the key fields only; you may degrade processing performance by adding unnecessary indexes.
- Disable query evaluation on fields not used for querying. As the documentation states, all fields are evaluated by default; specifying which fields should not be evaluated will increase performance.
- Unicode(false). If your report is in plain English and does not use double-byte characters, turn Unicode support off. It is on by default.
- Set the query evaluation mode to Snapshot. After processing has taken place, it’s time for the users to view the output, and Snapshot mode is best for a client-server setting, provided your memory can accommodate it.
- BlockSize(8). I have found that following this recommended setting increases performance; changing it to a higher value made processing slower for me.
- When processing, open or create the file using OpenFile; use OpenServer to open it later for shared use. Single-user access is still the fastest for processing data, and since users don’t need the output at processing time, OpenFile is just right. Only when the time comes for users to view the report in your application is the file hosted using OpenServer.
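Pulled together, the settings above might look like the sketch below. This assumes the db4o 6.x .NET configuration API, so exact method names may differ in your version, and the MonthlySummary class, its fields, the file name, the port and the credentials are all illustrative:

using Db4objects.Db4o;
using Db4objects.Db4o.Config;

public class MonthlySummary
{
    public string CustomerId;   // key field used in report queries
    public decimal Total;
    public string Notes;        // never used as a query criterion
}

public class MonthlyReportFile
{
    public static void ProcessAndPublish()
    {
        IConfiguration config = Db4oFactory.NewConfiguration();

        config.FlushFileBuffers(false); // acceptable here: each monthly file is reproducible
        config.Unicode(false);          // plain-English data, no double-byte characters
        config.BlockSize(8);            // the recommended block size
        config.Queries().EvaluationMode(QueryEvaluationMode.Snapshot);

        IObjectClass summary = config.ObjectClass(typeof(MonthlySummary));
        summary.ObjectField("CustomerId").Indexed(true);     // index key fields only
        summary.ObjectField("Notes").QueryEvaluation(false); // skip evaluating this field

        // Nightly processing: single-user access is fastest.
        using (IObjectContainer db = Db4oFactory.OpenFile(config, "reports-2008-01.db4o"))
        {
            // ... aggregate the raw data and Store() MonthlySummary objects ...
        }

        // Later, host the same file for shared report viewing.
        IObjectServer server = Db4oFactory.OpenServer(config, "reports-2008-01.db4o", 4488);
        server.GrantAccess("reportuser", "secret");
        // ... clients connect with Db4oFactory.OpenClient(...) ...
    }
}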
Separating data into different files keeps my queries simpler than having one database for everything, and that in turn increases performance, because a more complex query is more likely to run slowly. Backing up the monthly data is also simpler in this setup, and the chances of reaching the database file size limit are low.
I have presented here my general considerations for this type of scenario: reporting. There are other factors to consider that may apply only to your situation, but I hope this general guide will help you.
Posted in
Databases,
db4o,
Performance,
Programming
Written on Wednesday, January 02, 2008 by Edwin Sanchez
Web services have been around for some time, and they are supported by lots of languages and development tools, including Java and the .Net languages. In .Net, invoking web service methods is as easy as calling your own methods from a class library. There are requirements, though, before you can invoke any web service method. Just as with a class library, you need to create a reference to the web service from the invoking client application; in .Net, this is a web reference. After that, you can declare a variable for your web service, like the one below:
MyWebService service = new MyWebService();
Calling it is also simple:
returnedvalue = service.MethodName();
In .Net 2.0, Microsoft added support for multithreaded programming with the event-based asynchronous pattern in the proxy code generated by the WSDL tool. This opens a new, better and faster way of invoking web services, and it allows the programmer to use the feature without writing complex multithreading code. MSDN Magazine has pointed out that many web sites built on ASP.Net do not make use of asynchronous web service calls and, in the end, experience the “Server Unavailable” error when too many processes have consumed the thread pool. To conserve this thread pool, you need to call web services asynchronously. In my last post, I pointed out that this is one of the design considerations when developing applications that use web services.
Implementing it is easy; you just need to add a little code. The first thing to do is to add the Async attribute to the @Page directive to allow asynchronous calls:
<%@ Page Language="C#" Async="true" %>
Then, depending on your requirements, you can use whichever event you need. What I have tried is an event handler for the Completed event of my proxy. Below is an example:
service.MethodNameCompleted += new WebServiceProxy.MethodNameCompletedEventHandler(service_MethodNameCompleted);
The MethodNameCompleted event is automatically generated by the WSDL tool, where MethodName is the name of the method you are trying to invoke. When the method you call has completed its execution, this event is raised, so whatever you need to do after the web method call (such as binding data sources to controls) should be coded in this handler.
Lastly, you call MethodNameAsync where, again, MethodName is the name of the web service method you are trying to call. The call returns immediately, allowing you to perform further operations. Note that the Async method itself returns void; the web method’s return value arrives in the Completed event arguments (e.Result) instead:
service.MethodNameAsync(args);
That’s all the additional code you need to call web services asynchronously. It is not a big deal to add, and the advantages outweigh the disadvantages.
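Putting the pieces together, here is a minimal sketch of the pattern in an ASP.Net 2.0 code-behind (with Async="true" in the @Page directive). MyWebService, GetReport, ReportGrid and the generated handler types are stand-ins for what the WSDL tool would generate for your own service:

using System;
using System.Web.UI;

public partial class ReportPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        MyWebService service = new MyWebService();

        // Wire up the Completed handler before starting the call.
        service.GetReportCompleted +=
            new GetReportCompletedEventHandler(service_GetReportCompleted);

        // Returns immediately; ASP.Net releases the worker thread
        // while the web method executes.
        service.GetReportAsync(2008, 1);
    }

    private void service_GetReportCompleted(object sender, GetReportCompletedEventArgs e)
    {
        // e.Result carries the web method's return value.
        ReportGrid.DataSource = e.Result;
        ReportGrid.DataBind();
    }
}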
When is this applicable?
- When you perform time-consuming tasks such as database operations (like a native query or SODA returning a large object set)
- When you need to execute multiple operations simultaneously
- When you need to wait for resources without “hanging” your application.
Take note of one thing: while you are debugging your web methods, adding a breakpoint somewhere in the web service method (or in a class library invoked by the web service method) will not work; that is, the debugger will not stop at the breakpoint you specified. To work around this, you can call the method synchronously at first, that is, the usual way, and change the code to an asynchronous call when you are finished debugging. Even better, a unit test can verify that your code works before you assemble all the components, so there will be less debugging to do.
So, how about that? If your hands are itching to try this out, try it now. You won’t regret using this feature.
Posted in
ASP.Net,
C#,
Performance,
Programming,
Web Services