Just some quick notes about comments made recently… They are great, and I can see that more clarification of my postings is in order…
(1) Yes, the math is very simple, as was the comment that adding 2-50ms is no big deal to performance. After seeing that comment, I wanted to make sure people understood the possible impact of this – yes, it is relative to your environment, architecture, and deployment (that's always true).
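To make that arithmetic concrete, here is a hypothetical back-of-the-envelope sketch in Python. The number of lookups per request and the request rate are assumptions for illustration, not figures from the original discussion:

```python
# Hypothetical back-of-the-envelope math: a small per-lookup penalty
# multiplies across lookups per request and requests per second.
# All numbers below are illustrative assumptions, not measurements.

added_latency_ms = 50        # assumed worst-case extra latency per directory lookup
lookups_per_request = 10     # assumed lookups a single application request performs
requests_per_second = 100    # assumed load on the application

extra_per_request_ms = added_latency_ms * lookups_per_request
extra_per_second_ms = extra_per_request_ms * requests_per_second

print(f"Extra latency per request: {extra_per_request_ms} ms")      # 500 ms
print(f"Aggregate extra wait per second: {extra_per_second_ms} ms") # 50,000 ms
```

Under those assumed numbers, "only 50ms" turns into half a second per request, which is why the impact depends so heavily on your environment and deployment.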
The previous blog postings refer to using a virtual directory primarily as a proxy tool, meaning you keep the underlying data structure relatively intact. In that case, I agree that a persistent cache would be an almost bizarre approach.
So, allow me to make the point I should have made to begin with… Sometimes you want to represent the information in a way that is significantly different from the way it is currently stored… This means creating new views of existing data, for example, across multiple database sources and tables. That involves multiple joins, which are costly in terms of processing. As Mark Wilcox mentioned, there are other tools available for solving these problems. Some databases support materialized hierarchical views of data to solve this problem. It is also possible to solve this type of problem with some virtual directories, but you need a persistent cache; doing the joins dynamically will be too slow for many applications (see the sketch below).
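As a rough illustration of why the persistent cache matters here, the following is a minimal Python sketch. The table names and data are made up, and the dictionary stands in for whatever persistence layer a given product actually uses; the point is only the contrast between re-running the join on every lookup and serving a pre-joined view:

```python
import sqlite3

# Hypothetical sources: an HR table and an application-accounts table that
# must be joined to present one unified view of a person.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE hr_people (emp_id TEXT PRIMARY KEY, full_name TEXT, dept TEXT);
    CREATE TABLE app_accounts (emp_id TEXT, login TEXT, app TEXT);
    INSERT INTO hr_people VALUES ('e1', 'Pat Example', 'Finance');
    INSERT INTO app_accounts VALUES ('e1', 'pexample', 'ERP'), ('e1', 'pat.e', 'CRM');
""")

def dynamic_view(emp_id):
    """Join across the sources on every lookup -- the join cost is paid each time."""
    return db.execute("""
        SELECT p.full_name, p.dept, a.login, a.app
        FROM hr_people p JOIN app_accounts a ON p.emp_id = a.emp_id
        WHERE p.emp_id = ?""", (emp_id,)).fetchall()

# Persistent-cache style: materialize the joined view once, then answer
# lookups from the pre-built structure instead of re-running the join.
persistent_cache = {}
rows = db.execute("""
    SELECT p.emp_id, p.full_name, p.dept, a.login, a.app
    FROM hr_people p JOIN app_accounts a ON p.emp_id = a.emp_id""")
for emp_id, name, dept, login, app in rows:
    entry = persistent_cache.setdefault(emp_id, {"name": name, "dept": dept, "accounts": []})
    entry["accounts"].append((login, app))

print(dynamic_view("e1"))       # join executed on every lookup
print(persistent_cache["e1"])   # join cost paid once, up front
```

The trade-off, of course, is that the pre-joined view has to be kept in sync with the sources, which is exactly the refresh question discussed in point (2).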
(2) I never argued that it was OK to wait hours for updated information. The fact is that many organizations CURRENTLY have such a situation, where updates can take hours or even a day to synchronize. I was proposing a solution that would do the same thing within 1 second… Thanks for letting me clarify that point…
Hopefully this clarifies things a bit, and thanks for the dialog!