It's Singleton, not Simpleton...dummy!
Friday, November 20, 2009
Please visit my personally hosted blog space:
Posted by Rem0teMeth0d at 12:18 AM
Thursday, March 26, 2009
Dear Dummy Friends,
I am back with some more trivia. In case you have been struggling with attaching a video stream to your application, capturing images, changing colors (filtering), or manipulating the results of a video stream capture, here are some tips for you.
The first things you need to know before you begin video or image filtering are:
- An image or a video can be thought of as a layer of colors (red, green, blue and opacity).
- The layers can be transformed using matrix transformations (ordinary mathematics).
- Transparency, brightness, contrast and other properties of an image or a video can be handled by applying these transformations. (This is known as filtering.)
It is highly recommended that you visit the following links to know better about the concept of image processing:
Keep tuned and get yourself ready with the understanding of color matrix.
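To give you a taste of what's coming, here is a minimal Java sketch of the color matrix idea (the class and method names are my own, invented for illustration, and not tied to any particular media framework). Each output channel of a pixel is a weighted sum of the input channels plus an offset; choosing the standard luminance weights turns the matrix into a grayscale filter.

```java
public class ColorMatrixDemo {
    // Apply a 4x5 color matrix to one RGBA pixel (channel values 0-255).
    // Row i of the matrix produces output channel i: a weighted sum of the
    // four input channels (columns 0-3) plus a constant offset (column 4).
    static int[] apply(double[][] m, int[] rgba) {
        int[] out = new int[4];
        for (int i = 0; i < 4; i++) {
            double v = m[i][4]; // offset term
            for (int j = 0; j < 4; j++) v += m[i][j] * rgba[j];
            out[i] = Math.max(0, Math.min(255, (int) Math.round(v))); // clamp
        }
        return out;
    }

    public static void main(String[] args) {
        // Standard luminance weights: every color row becomes the same
        // weighted sum of R, G and B, so the result is a shade of gray.
        double[][] grayscale = {
            {0.299, 0.587, 0.114, 0, 0},
            {0.299, 0.587, 0.114, 0, 0},
            {0.299, 0.587, 0.114, 0, 0},
            {0,     0,     0,     1, 0}  // alpha passes through unchanged
        };
        int[] pixel = {200, 100, 50, 255}; // an orange-ish pixel
        int[] gray = apply(grayscale, pixel);
        System.out.println(gray[0] + "," + gray[1] + "," + gray[2] + "," + gray[3]);
        // prints 124,124,124,255
    }
}
```

Running the same loop over every pixel of a frame is, in essence, what a filter does to an image or a video stream.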
Thursday, February 19, 2009
I have just returned from the second day of Sun Tech Days 2009, Hyderabad. The first thing I decided to do in the last few hours was to look up Project Kenai, a platform for collaborative, open source product development with a host of social networking features and a Web 2.0-style website that looks cool and feels cool too (and I am not referring to the blue that is the website's theme color). There are various things that might not appear quite user friendly to a usability expert, but for a developer trying to build a product in collaboration with the whole community of Java developers, the features are just about right to get started.
I have registered on the website and have been granted permission to host my own project, called Drishta. I am going to talk more about the project and the idea behind it in upcoming posts, but for now what I am most excited about is that I have finally decided to reignite this initiative, which has been lying dormant on my TODO list for nearly a decade.
I have liked the idea of Kenai so far. It is simple. Straightforward. And easy to use. I would recommend giving it a fair try and forming your own impression of this useful platform.
Keep tuned for Drishta!!!
Friday, January 30, 2009
Sometimes, a picture is indeed worth a thousand words. In particular, I always felt that the Design Patterns book was quite verbose and distracting in its approach. It would have been great if the examples had been a little simpler and less cluttered. A bit of comparative study of these patterns would also allow one to appreciate the subtle nuances and differences between patterns that look quite alike to a newbie.
Finally, I struck gold when I found this link: an old site that contains pictorial (PepperSeed) images showcasing the Design Patterns. The notation is quite similar to UML. And there is, astoundingly, no text whatsoever!
I find this a very essential five-minute refresher that architects should keep handy for reference, for whenever a moment of uncertainty while designing a complex application renders them actionless.
This is the link to the site. Gang of Four Design Patterns
Here is a sample of the Adapter Pattern and the Bridge Pattern; when placed side by side, one can see the subtle difference so clearly, and without any textual clutter around it.
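If you prefer code to pictures, the same contrast can be sketched in a few lines of Java (all class names here are mine, invented for illustration). An Adapter retrofits an existing, incompatible class behind the interface a client expects; a Bridge separates an abstraction from its implementation up front, so the two hierarchies can vary independently.

```java
public class AdapterVsBridge {
    // --- Adapter: wrap an existing class so it fits a target interface ---
    interface MediaPlayer { String play(String file); }
    static class LegacyVlc { String playVlc(String f) { return "vlc:" + f; } }
    static class VlcAdapter implements MediaPlayer {
        private final LegacyVlc legacy = new LegacyVlc();
        public String play(String file) { return legacy.playVlc(file); } // delegate
    }

    // --- Bridge: abstraction and implementor designed apart, then composed ---
    interface Renderer { String render(String shape); }        // implementor side
    static class VectorRenderer implements Renderer {
        public String render(String s) { return "vector:" + s; }
    }
    static abstract class Shape {                               // abstraction side
        protected final Renderer renderer;
        Shape(Renderer r) { renderer = r; }
        abstract String draw();
    }
    static class Circle extends Shape {
        Circle(Renderer r) { super(r); }
        String draw() { return renderer.render("circle"); }
    }

    public static void main(String[] args) {
        // Adapter: the client only sees MediaPlayer, never LegacyVlc.
        System.out.println(new VlcAdapter().play("song.vlc"));          // vlc:song.vlc
        // Bridge: any Shape can be paired with any Renderer at runtime.
        System.out.println(new Circle(new VectorRenderer()).draw());    // vector:circle
    }
}
```

The structures look almost identical on a diagram (both delegate through composition), which is exactly why the side-by-side view is so valuable: Adapter is an after-the-fact fix, Bridge is a deliberate, up-front split.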
I hope you appreciate the essence of the text-less GoF Design Patterns!
P.S. The images were stolen from Gang of Four Design Patterns.
Monday, November 24, 2008
This is a classic dilemma. What is a better model for browser-based clients and back-end services to interact? Should there be as little dialogue as possible with maximum data transfer, or as much dialogue as possible with minimum data transfer in each request-response exchange? The problem is not as simple to solve as the problem statement makes it sound.
There are various parameters that need to be considered for answering this question. And the solution varies as per the requirements.
Let's start with considering a revolutionary example: Gmail!
As we are aware, Google employed the AJAX philosophy and created one of the first AJAX-based email applications, changing the paradigm of web mail by improving the performance of reading, composing and sending emails over the internet. The browsers were the same as before, the bandwidth was the same, but the application architecture had changed drastically, making more possible with the same set of resources.
However, it was not just about using a new technology but identifying the appropriate problem that this technology could solve. In an email type web application, a user generally performs units of tasks with every click.
- Open a mail for reading,
- Compose a mail/reply,
- Add / Remove the attachments and
- Send the mail.
If a modification is required to the content of an email that has already been sent or delivered, one needs to re-send the mail with the modifications. In this scenario, fetching each mail's content only when its header is clicked makes sense, and doing it asynchronously (i.e., without refreshing the whole page) makes the usability far more intuitive and elegant.
On the other hand, for an application that shows data which is highly likely to be modified at the very time a user is viewing it, such a model of conversation may not really work.
Say you have an application that displays data in a grid where rows and columns are collapsible and contain levels of information (something like a hierarchical data set). See the example.
The grid's cells do not enjoy absolute independence from the rows and columns of data surrounding them. In other words, each cell is not just an atomic piece of information in itself; it also forms part of the information at the level above it. For instance, a Physical Supply of 100 could mean there is a PO of 50, a POK of 30 and a TO of 20, all displayed as child levels of Physical Supply. Now, suppose a new PO of 20 were created by some user while the data above is on display in the Supply Chain Profile (SCP). Showing an additional 20 for PO, i.e. a PO of 70, would not be enough to represent the actual state of the supply chain, because the sum of quantities for the different documents (70 + 30 + 20 = 120) would be inconsistent with the total shown for Physical Supply (50 + 30 + 20 = 100).
With this problem at hand, the requirement would be to update all the cells in the row and column containing the modified cell, as part of recalculating the summaries for that row and column. This in turn requires us to keep track of every cell in the grid, because any cell is about equally likely to be modified at any given time, so any row or column could need to be refreshed with the latest data.
Essentially, each cell update leads to an update of almost all the cells in that section (sections here being the parent rows, viz. Demand, Supply, Recommendations, etc.). We could therefore limit a cell update to refreshing just the section in which it lies, but that would lead to a different kind of inconsistency. For instance, a Demand of 30 and a Supply of 20 lead to a Recommendation of 10 and a Projected Inventory of -10. This means that even the sections within the SCP grid are interdependent; therefore any cell that has to show updated data requires, directly or indirectly, all the other cells in the grid to update as well.
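To make the interdependence concrete, here is a toy Java sketch of the rollup problem (the class name and the numbers are mine, mirroring the PO/POK/TO example above, not the actual SCP code). The parent total must always equal the sum of its child documents, so changing any one child silently invalidates every displayed total above it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SupplyRollup {
    // The parent level (e.g. Physical Supply) is just the sum of its children.
    static int total(Map<String, Integer> docs) {
        return docs.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        // Child documents under "Physical Supply", as in the example above.
        Map<String, Integer> docs = new LinkedHashMap<>();
        docs.put("PO", 50);
        docs.put("POK", 30);
        docs.put("TO", 20);
        int displayedTotal = total(docs); // the grid currently shows 100

        // A new PO of 20 arrives while the grid is on screen: PO becomes 70.
        docs.merge("PO", 20, Integer::sum);

        // Refreshing only the PO cell leaves the parent total stale: 120 != 100.
        System.out.println("children sum=" + total(docs)
                + " displayed=" + displayedTotal
                + " consistent=" + (total(docs) == displayedTotal));
        // prints children sum=120 displayed=100 consistent=false
    }
}
```

The same argument then cascades upward: Recommendations and Projected Inventory are derived from Demand and Supply, so the staleness spreads across sections, not just within one.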
So, it is pretty clear that the data in the entire grid has to be treated as a whole in order to view consistent information in the Supply Chain Profile at any given time. The rendering of the data, however, doesn't need to happen in a single shot: viewing a section, or a particular row, column or cell, is quite similar to checking one's inbox for a specific email. Once that email is located, all one cares about is reading the information within. Similarly, a user will only follow a single path when drilling down into the SCP at any given time. This leads to a simple requirement: render data (cells, rows, columns) only for that particular path.
The conclusion, or the solution to our problem, is to fetch the entire data set in a single hit to the server, and then render it selectively and efficiently based on where the user clicks in the UI. We can and did try making the cell-specific information (shown in the pop-ups) asynchronous, but that requires holding on to the state of the data object that was transported to the client earlier, which adds memory and state-management requirements on the back end. Therefore, we chose to send the information in bulk to the front end. This might be slightly slower, but it represents the state of the SCP accurately and consistently, always and every time.
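That conclusion, fetch once and render on demand, can be sketched as a toy Java model (again, the class and method names are mine, not the actual SCP code): the whole snapshot arrives in one server hit, and rendering only ever touches the path the user drills into, so every path is consistent because it comes from the same snapshot.

```java
import java.util.Map;

public class LazyGrid {
    private final Map<String, int[]> data; // whole dataset, fetched in one hit

    LazyGrid(Map<String, int[]> data) { this.data = data; }

    // Render only the section the user drilled into; the rest stays
    // un-rendered but is already consistent, since it is part of the
    // same snapshot that was fetched in the single server round trip.
    String renderSection(String section) {
        StringBuilder sb = new StringBuilder(section + ":");
        for (int v : data.get(section)) sb.append(" ").append(v);
        return sb.toString();
    }

    public static void main(String[] args) {
        // One bulk fetch: all sections arrive together.
        Map<String, int[]> snapshot = Map.of(
                "Demand", new int[]{30},
                "Supply", new int[]{50, 30, 20});
        LazyGrid grid = new LazyGrid(snapshot);
        // The user drills into Supply; only that path is rendered.
        System.out.println(grid.renderSection("Supply"));
        // prints Supply: 50 30 20
    }
}
```

The trade-off is exactly the one described above: a heavier first response, in exchange for never having to reconcile a partially refreshed grid.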