Monday, March 30, 2009

missmiis


Miss Miis (missmiis - clever, huh?) made some comments in a recent blog posting that I wanted to bring attention to.  She keeps up on ILM/MIIS happenings, and her blog postings are useful to people using these tools.  Recently she was at the TEC conference in Las Vegas and got a chance to learn more about virtual directories from Todd Clayton.  She gives a clear and easy-to-understand explanation of what virtual directories do: deliver identity data in the way applications want to see it.  This abstracts the complexity of data silos away from applications, making it easier to manage your data without disrupting your environment.

She aptly contrasts the ILM approach of synching data around with the virtual directory approach of keeping data where it is and presenting the distributed data as a single LDAP source.  She seemed very excited and eager to learn more about this technology.  The use cases for virtual directories are many, and I believe they will continue to expand as people become more aware of the capabilities.

Check out her post and take a look around her blog at the same time.


Wednesday, February 18, 2009

why cache comments...

Just some quick notes about comments made recently… They are great, and I can see that more clarification of my postings is in order…

(1) Yes, the math is very simple, as was the comment that adding 2 - 50ms is no big deal for performance. After seeing that comment, I wanted to make sure people understood the possible impact of this – and yes, it is relative to your environment, architecture, and deployment (that's always true). 

The previous blog postings refer to using a virtual directory primarily as a proxy tool, meaning you keep the underlying data structure relatively intact. In that case, I agree that a persistent cache would be an almost bizarre approach. 

So, allow me to make the point I should have made to begin with… Sometimes you want to represent the information in a way that is significantly different from the way it is currently stored…  This means creating new views of existing data, for example across multiple database sources and tables. That involves multiple joins, which are costly in terms of processing.  As Mark Wilcox mentioned, there are other tools available for solving these problems; some databases support materialized hierarchical views of data for exactly this reason.  It is also possible to solve this type of problem with some virtual directories, but you need a persistent cache - doing it dynamically will be too slow for many applications.
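To make that concrete, here is a minimal Python sketch of the persistent-cache idea (the source data, field names, and cache file are hypothetical stand-ins, not any vendor's actual mechanism): the join across a directory and a database table is computed once, offline, so queries hit the pre-built view instead of paying for the join every time.

    # Minimal sketch: pre-compute a joined identity view and keep it in a
    # persistent cache, rather than joining across sources on every query.
    # All data and names below are made-up stand-ins.
    import json

    directory_entries = [  # stand-in for entries pulled from an LDAP source
        {"uid": "jdoe", "cn": "Jane Doe", "mail": "jdoe@example.com"},
    ]
    hr_rows = [            # stand-in for rows pulled from a database table
        {"employee_id": "jdoe", "department": "Finance", "title": "Analyst"},
    ]

    # Build the merged view once, offline, instead of per query.
    hr_by_id = {row["employee_id"]: row for row in hr_rows}
    cache = {}
    for entry in directory_entries:
        merged = dict(entry)
        merged.update(hr_by_id.get(entry["uid"], {}))
        cache[entry["uid"]] = merged

    # Persist the materialized view so it survives restarts.
    with open("identity_view_cache.json", "w") as f:
        json.dump(cache, f)

    # At query time, a lookup is a single read of the pre-joined record:
    print(cache["jdoe"]["department"])   # -> "Finance"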

(2) I never argued that it was OK to wait hours for updated information.  The fact is that many organizations CURRENTLY have situations where updates can take hours or even a day to synchronize. I was proposing a solution that would do the same thing within 1 second…. Thanks for letting me clarify that point…

Hopefully this clarifies things a bit, and thanks for the dialog!

Thursday, February 12, 2009

why cache and virtual directories???

I know I have mentioned this before, but since there’s an ongoing ping-pong match about cache — particularly “persistent cache” — playing out on the identity blogs lately, I thought I’d return the volley. Ashraf Motiwala covered the topic again in his blog yesterday, so here are my two cents.

First off, while it's true there are times when cache makes no sense, there are other times when it makes a great deal of sense.  Cache is used everywhere - in your PC, your software, your servers, EVERYWHERE - so arguing against cache seems completely strange to me.

Second, I always find it funny to hear arguments against having more options. Why argue against choice? I can cite many projects that have been deployed using virtual directories with caching, and yes, this includes "persistent" cache. 

Third, cache is necessary because when you merge (join) multiple tables across different databases (or directories, for that matter), the results are just not fast enough for any type of security application. Anyone familiar with databases should understand this quickly. Once you join several objects or tables, the response rate of the source is dramatically reduced. The joins necessary to create views are sometimes too complex to do on the fly for most directory-enabled applications, as is common in IdM/security. This, in my mind, is a key piece of virtual directory functionality, right after aggregating sources behind a common protocol.

Fourth, 2 to 5 milliseconds can be a big deal, and cache is essential to eliminating that lag. Think about it: if I have to search for a member in a directory and then search a database table for additional attributes to join to that object, do you really think it will perform at close to the same speed? And that is with just two sources... imagine the performance hit you'd take by adding more sources and multiple join operators.
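Here is a toy Python sketch of that two-hop pattern (the lookup functions and their latencies are made up purely for illustration): every single query pays the directory round trip, plus the database round trip, plus the join - which is exactly the lag a cache removes.

    # Toy sketch of a dynamic two-source lookup; the functions below are
    # hypothetical stand-ins for real LDAP and SQL calls.
    import time

    def search_directory(uid):
        time.sleep(0.002)   # pretend ~2 ms directory round trip
        return {"uid": uid, "cn": "Jane Doe"}

    def fetch_db_attributes(uid):
        time.sleep(0.003)   # pretend ~3 ms database round trip
        return {"department": "Finance", "title": "Analyst"}

    start = time.perf_counter()
    person = search_directory("jdoe")
    person.update(fetch_db_attributes(person["uid"]))   # the "join"
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(person, f"{elapsed_ms:.1f} ms")   # both hops are paid on every query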

When your directory is expected to perform at 8000 queries a second, adding 2 to 5 milliseconds can be a VERY big deal. OK, let's keep the math simple and take a closer look at what the problem is…

  • I have a directory that performs at 5,000 queries per second (q/sec)
  • That translates into 0.2 milliseconds per query (1,000 ms / 5,000)
  • The "overhead" is 2 milliseconds (the best performance cited)
  • My queries now take 2.2 milliseconds (11 times slower)
  • So instead of 5,000 q/sec, when I access my directory I get only about 455 q/sec (1,000 / 2.2)

Some people might argue this is not a "minimal" performance hit. There are many initiatives where this type of speed would be totally unacceptable. This is actually a perfect case where persistent cache would be helpful. A persistent cache could easily bring this query rate back up to 5000 q/sec (or higher), even in the case of more complex operations such as more than two sources and more than one join. 
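For anyone who wants to check the arithmetic, here it is spelled out in a few lines of Python, using the same simplified one-query-at-a-time model as the bullets above (the numbers come from the example, not from a benchmark):

    base_qps = 5000
    per_query_ms = 1000 / base_qps                   # 0.2 ms per query
    overhead_ms = 2.0                                # added join/lookup overhead
    new_per_query_ms = per_query_ms + overhead_ms    # 2.2 ms
    new_qps = 1000 / new_per_query_ms                # ~455 queries per second
    print(per_query_ms, new_per_query_ms, round(new_qps))   # 0.2 2.2 455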

Fifth, the idea that you compromise the "freshness" of data for the sake of speed misses the point about what sort of information we're dealing with here. We're dealing mostly with identity data in directories (people and other objects), and the identities themselves do not change very often in comparison to other data, such as transactions, where updates and write operations are more common than search/query. For example, in your bank account, your "identity" information (name, address, phone, PIN, passwords, etc.) changes far less often than your balance and activity.

The underlying idea is correct: a cache will create a lag before updates are available to client applications. BUT virtual directory implementations that use a persistent cache with event-detection cache-refresh mechanisms offer (near) real-time incremental updates of information. 

Furthermore, if an account is disabled and it takes 1 second to propagate that change to all cache instances, that would be an improvement. Many organizations currently take several minutes or even a full 24 hours for this type of update. 
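Here is a rough sketch of that event-driven refresh idea, with an in-process queue standing in for a real change log or message bus (all names and data here are hypothetical):

    # Change events are pushed into the cached view as they are detected,
    # so a disabled account is reflected almost immediately rather than
    # waiting for a batch synchronization.
    import queue

    change_feed = queue.Queue()
    cache = {"jdoe": {"uid": "jdoe", "accountDisabled": False}}

    # A detector somewhere publishes the change the moment it is seen:
    change_feed.put({"uid": "jdoe", "accountDisabled": True})

    # The refresh loop applies incremental updates instead of full rebuilds:
    while not change_feed.empty():
        event = change_feed.get()
        cache.setdefault(event["uid"], {}).update(event)

    print(cache["jdoe"]["accountDisabled"])   # -> True, within moments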

Well, those are my 5 cents worth of comments... I know I promised only 2, but who's counting pennies, just milliseconds, right? :)

Oh, btw, no one has mentioned the distributed remote persistent cache story, which I have seen implemented with virtual directories — now we’re talking about some serious advantages... If anyone is interested, I would participate in such a discussion... 

Metadata and Integration

Wow, nice to see someone articulate the problem. Too many vendors and architects use such divergent language that solutions can't converge right now... David puts his finger right on it here... We have to start understanding the metadata and semantic relationships (especially for security) in our systems, in a way that is scalable...

Check out David Linthicum's posting...

 


Monday, October 27, 2008

Identity Management 2.0 ?

Darren Calman posted something that has attracted some attention from Matt Pollicove and from Ping.  I think it is worth paying attention to. You will find some good historical and pragmatic information, but of particular interest to me was what was said about identity virtualization.

Virtual directories are being touted as IdM 2.0 by Matt Pollicove, as an "identity virtualization" service.  Virtual directories have been around for almost a decade now (8-9 years); perhaps their value is being increasingly realized.  The need for directories is not going away soon: their secure hierarchical structures give context and the high performance needed for such things as web portals, which could be servicing millions of users.

I believe that the need for this type of value will continue long term, even if we see a decline in the use of directories and LDAP.  There persists a real need for applications to understand semantic relationships that are not easily represented in the more traditional, more prevalent RDBMS, AND, in security or externally facing applications (where user volume is typically higher), a real need for the high performance and availability that cannot be easily achieved through other database systems.  Using a thin virtualization layer on top of other databases to provide the functionality of a directory for security, but not the primary storage of data, is an interesting evolutionary possibility in the identity space. Security, search, and query could still be serviced through this layer, while insert, update, and delete operations are handled by an RDBMS, maximizing efficiencies.

The virtual directory offers this "virtualization of identity" solution, where relational tables can be presented as hierarchical views, accessible via LDAP or other protocols as needed (a minimal sketch of such a modeled view follows the list below). To achieve this value, your "identity virtualization" service must offer:

  1. data modeling (so as not to constrain you to the existing structures when a different view is needed, while still maintaining the existing relationships),
  2. the ability to maintain high performance (because if the back-end sources are not primarily LDAP, or there are complex joins or cross-application searches needed for authorization, performance will not otherwise be high enough),
  3. a choice in deployment (proxy AND data model; dynamic AND event-driven update of an instantiated model, i.e., materialized hierarchical views), and
  4. a choice of protocols (not only LDAP, but an object-oriented system that is more agnostic about delivery; at minimum you should get LDAP, SQL, and web services).
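As promised above, here is a minimal Python sketch of point 1, the data-modeling piece: flat relational rows presented as a hierarchical, LDAP-style view without touching the source. The table, attributes, and DIT layout are all made up for the example.

    # Model flat rows as a virtual DIT; none of this changes the source data.
    rows = [
        {"emp_id": "jdoe", "name": "Jane Doe", "dept": "Finance"},
        {"emp_id": "bsmith", "name": "Bob Smith", "dept": "Sales"},
    ]

    base_dn = "ou=people,dc=example,dc=com"
    entries = {}
    for row in rows:
        dn = f"uid={row['emp_id']},ou={row['dept']},{base_dn}"
        entries[dn] = {
            "objectClass": ["inetOrgPerson"],   # what most LDAP apps expect
            "uid": row["emp_id"],
            "cn": row["name"],
            "ou": row["dept"],
        }

    for dn in sorted(entries):
        print(dn)   # uid=bsmith,ou=Sales,... and uid=jdoe,ou=Finance,...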

Perhaps we are about to pass the "hype" stage of identity management, but that means it is now time for the heavy lifting to begin: the work of deployment and implementation.  Is your infrastructure ready?

 

Friday, September 12, 2008

Burton Id Correlation Stressed

Burton is hosting a telebriefing about identity correlation.  I have to say I'm a bit surprised, but I think it's great.  (See the blog posting by Ian Glazer here.)

It is true: if you want to have true access management, you have to start with being able to correlate accounts. Without this you have no way to even know who has access to what, much less audit, control, or monitor it in any way.

Correlation enables "Access Certification, Role Mining, Entitlements Management, Policy Evaluation, Identity Auditing, and numerous other custom services developed by our customers... password management and user provisioning.  The reality is the correlating of accounts to people is a requirement for all identity management exercises.  "

How true it is.  You have to establish a mathematical union of identities.  Think back to basic set theory: you have to establish a record where each user is represented exactly and only once. Some would call it "global key mapping", others "correlation", or even "account linking".  Whatever you call it, the idea can still be reduced to the old concept of creating a union of two or more sets. 
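As a tiny illustration of that union idea (the attribute names and the choice of email as the correlation key are purely illustrative):

    # Collapse accounts from two systems into one record per person.
    ad_accounts = [{"sAMAccountName": "jdoe", "mail": "jane.doe@example.com"}]
    hr_records  = [{"employee_id": "E1001", "email": "jane.doe@example.com"}]

    people = {}   # one entry per person, keyed by the correlation value
    for acct in ad_accounts:
        people.setdefault(acct["mail"].lower(), {})["ad"] = acct
    for rec in hr_records:
        people.setdefault(rec["email"].lower(), {})["hr"] = rec

    print(len(people))   # 1 -- each user represented exactly once
    print(people["jane.doe@example.com"])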

Here is another great point, "Here's a tip to enterprises out there - ask your software vendors and deployment teams what capabilities they have to help facilitate this correlation.  Ask early and before you start down the path of an identity project.  Make it an on-going process governed by your overall identity management program."

TRUE!  Do this planning early.  I was talking with several integrators this past week at DIDW in Anaheim, and it's amazing how much work you still have to put into convincing people not to just buy an IAM suite, but rather to first solve some integration problems (i.e., correlation). It seems like I never stop talking about this, but I'm glad to see Burton taking up the mantra also!  :)   If you can, join the teleconference on October 1st and 2nd.

 


Tuesday, July 29, 2008

The Safari of Identity

I thought I would have a bit of fun and continue my analogy about the jungle we often find ourselves in when dealing with the sticky (and often unknown) issues of identity and identity management.

African safaris were plagued by slave trading until the British put a stop to it in 1896, in the shortest battle in history (38 minutes).  The past for identity has been similar: you were forced into solutions, sometimes without your consent - you didn't really know what was happening, or that you were even solving what we now call an "identity" problem.  Perhaps you were just trying to get authentication services to work.  You saw a problem and solved it.  Now you have choices - you are free to choose, and the options are a bit clearer - but there is still a jungle, and navigating it can be difficult.

So, now you have choices, but just like going on safari, never go without someone who has been there - just as Stanley and Livingstone never went without their native guides, the most famous of whom were Chuma and Susi. These two were responsible for carrying Livingstone's body thousands of miles so it could be transported back to England for a proper burial at Westminster Abbey in 1874.  There are "guides" who don't specialize in much; they just "know everything".  Identity is unique and has unique issues - you can't trust just any "data" expert.

So, the first lesson of the identity safari: don't be a slave to the biggest company or solution; be creative and look around.  Second, make sure your choice of products and consultants specializes in identity from the ground up - not just as another addition to the "global suite" of "stuff" they can sell you.  Such vendors really don't know the what or the why behind these solutions, such as virtual directories. 

I noticed an interesting short post from James McGovern in which he states:

"Mark Wilcox, Nishant Kaushik and others have discussed the virtues of virtual directories but have at some level done themselves a disservice by describing them as glorified proxies."

VERY TRUE! Here is an example of what I see as a failure to know your jungle. Do you need more than proxy?  OF COURSE!  But if you see virtual directories as nothing more than a glorified proxy service you are limiting your view of the jungle.  When your solution revolves around old ideas and paradigms you will have the same old results and complexity of the past.

From that vantage point, the solution looks like more help, more products, more data stores, complex synchronization - and then you start searching for SOA solutions and whatever new architecture ideas they recommend you adopt.  WAIT! STOP! Don't burn down the jungle just because you have a poor guide.  It's really not that difficult when you have the right information in hand and tools prepared for now and for what will come. 

I was going to post more about this misunderstanding in the virtual directory market, but here is a good newsletter article from Oct 2007 that explains it well.  Check it out.

 

Friday, July 18, 2008

Lost in the Jungle?

To add more thoughts and explanation to my last posting, "Are we lost?"

The idea is to not cut through the jungle every time we have a new initiative.  At least solve the integration problem once.  Don't reinvent the wheel every time you have a new application or new initiative.  Don't replicate data everywhere, employ complex synchronizations, and in general make everything 10X more complicated, compounded with each new deployment. 

Build one platform to plug everything into.  Sounds too good to be true?  Yeah, maybe it's not "everything" yet, but almost.  AND I believe this technology will evolve further and will become even more useful. 

This doesn't address the plethora of applications and methods available in the IdM market, but it does give you the opportunity to not care.  You can integrate any application you want, use any protocol, any schema, any data structure, any security means, any authorization/authentication schemes you want, as many as you want.  That is the power of virtualization. 

For me, within the virtualization platform you need proxy, data modeling, synchronization, a service bus, and the ability to build a correlation index (key mapping) if needed.  If you have these things within a virtualization "platform", then you will be ready to face the future.  You know where you can start: at least your identity stores are integrated into a common platform.

If you need LDAP, you have it. If you need web services, use them. If you need to concatenate, transform, and otherwise alter objects, you have the tools.  If you need to query data, you can; if you need to push/synchronize data, you can.

The beauty is you do this work once: integrate your sources, and then you just need to build a virtual object (e.g., in XML) to meet the needs of a new application or initiative and decide how you want to access it.  This is where you save time, money, and headaches. 

You might still get lost in the IdM jungle, but at least you have a point of reference, a stronghold in the jungle.  A starting point instead of starting from scratch each time. This means you are free to choose a solution or application based on the business needs and its own merits, not your environment.  Otherwise you get lost in trying to calculate integration costs, deployment times, feasibility studies, and so on.

Solve what we can today, let others worry about the future. "Tomorrow has enough worries of its own." Simplify your workload, use virtualization to solve the problem of identity integration, don't stay lost in the jungle.


Thursday, July 17, 2008

Are we lost?

There is a recent posting of interest from Pamela Dingle; you can check it out here.  I love to see people talk from their gut about real issues, without the marketing guff and fluff - just real-life observations. Her recent posting "Catalyst Epiphany 2 - We're a little lost" points out the fact that identity management is all over the map.  I agree; sometimes I feel like companies are throwing out solutions as fast as they can and hoping that something sticks. Identity solutions have become a type of jungle, one where you never know exactly what to expect once you get inside.  Yes, there are explorers who have been there and have reported back to us what strange and deadly creatures you will encounter on your own journey.  But where are the roads, the standards?

When my personal life gets like this, I know it is time to slow down, take inventory, and examine the issues. Perhaps this is where we need to start: slow down and begin to examine the underlying issues and what the real needs are in Identity and Access Management.  I have been to this crossroad, and I think this is why I believe so strongly in the identity virtualization platform (which the virtual directory is a part of, and which started this concept).  It has given me an anchor point to start from - tools in my pocket, like a Swiss Army knife, to address the multitude of issues that plague identity integration.

 

Tuesday, July 15, 2008

Active Directory Reality Tour Travelblog: James' unanswered questions...

Jackson's Identity Management & Active Directory Reality Tour Travelblog: James' unanswered questions...

As most of you know, I try to keep up on some blogs out there, one of which is Jackson Shaw's.  I wanted to throw in my two cents on a couple of responses he made yesterday to another blog's posting (James McGovern). 

JAMES: If pretty much every Fortune 500 enterprise (acknowledging that Sun is the standout oddball) has Active Directory, why should any of them consider yet another product? Why shouldn't they simply wait for Microsoft to include virtualization support in Active Directory? Please no responses that are "tactical" in nature nor attacking Microsoft because they never get it right the first time.

JACKSON: I agree. Why should they? It's obvious - at least to me and I am pretty sure a lot of other folks out there - that identity management has evolved beyond directory synchronization and metadirectory to include the concept of virtual directories and all of what that means. Is there ever an ideal product? Sure, for a period in time. But times change. If Microsoft isn't thinking about how to solve this problem I would be surprised - oh, and let's not forget that a possible solution is to do nothing.

Yes, Microsoft already has several options for managing the problems solved by virtual directories, if you are primarily a Microsoft shop (if someone really wants to know, let me know and I will get you a list).

They (Microsoft) are interested in selling more products and keeping you "in the fold"; virtual directories tend to give you more freedom to leave the sheep pen.  If you are a Microsoft shop, then their products integrate relatively well with their other products.  I'm not as magnanimous as Jackson in feeling that Microsoft is particularly interested in the problems that are primarily encountered in a non-Microsoft-centered architecture. 

Also, this idea of using AD as a central identity store - umm, not so fast.  There are a couple of issues. Yes, almost every enterprise has AD, but that doesn't mean it is their central identity store.  (1) AD stores lots of information, primarily related to its original design as a NOS (network operating system) directory.  Moving all your needed identity information into AD is just a bad idea in the minds of a lot of architects who don't want to "tinker" with AD operations.  I would agree with their position here. (2) Also, it does not offer high enough performance for many deployments, so then you replicate instances, and things get more complex.  Where a SunOne Directory Server is performing 5K-6K queries per second (qps), on the same hardware you could see AD performing only 2K-3K qps or less.  I've seen AD doing less than 900 qps.  (3) Then you have the issue of non-standard LDAP operations. There is a rather long list of things I have compiled over the past year where AD is just not LDAP v3 compliant, not the least of which is that most applications want to see the "inetOrgPerson" objectclass, not "user". These little things can hang up projects for longer than you might think. Yes, again, Microsoft has "work-arounds", but if you aren't committed to being a Microsoft customer no matter what, why would you try to go this route? 
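To illustrate point (3), here is a rough Python sketch of the kind of objectclass/attribute mapping a virtual directory layer performs so an application that expects inetOrgPerson can consume AD "user" entries. The attribute map below is illustrative only, not a complete or authoritative mapping.

    # Present an AD "user" entry under the objectclass/attribute names most
    # LDAP applications expect. The mapping table is a simplified example.
    ATTRIBUTE_MAP = {
        "sAMAccountName": "uid",
        "displayName": "cn",
        "mail": "mail",
    }

    def present_as_inetorgperson(ad_entry):
        mapped = {"objectClass": ["inetOrgPerson"]}
        for ad_attr, ldap_attr in ATTRIBUTE_MAP.items():
            if ad_attr in ad_entry:
                mapped[ldap_attr] = ad_entry[ad_attr]
        return mapped

    ad_entry = {"objectClass": ["user"], "sAMAccountName": "jdoe",
                "displayName": "Jane Doe", "mail": "jdoe@example.com"}
    print(present_as_inetorgperson(ad_entry))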

JAMES: The ideal situation says that a software company should be able to write an directory enabled application without requiring virtual directories but reality is a little different. Wouldn't the thought leaders in this space without resorting to tactical responses agree that instead of pushing products/tools in this space we have to help others understand the <> so that this problem goes away? Even products by companies with really smart individuals such as EMC still get this wrong. Does Oracle have any thoughts on helping people avoid virtual directory by writing better directory-enabled applications or is it better to bury one's head in the sand, ignore the problem of others and simply respond with point solutions.

JACKSON: James, you are right. Software companies need to do a better job when they write directory enabled applications but it is a long road. The average application developer still needs to do a better job managing identity, authentication and authorization. We still have a long way to go unfortunately.

Sure, in the ideal world... well, maybe... This exchange of ideas makes more sense if virtual directories are viewed only as LDAP proxies, a way to aggregate multiple directories and centralize data sources.  They do more than that; they:

  • integrate applications, databases, web services, and directories,
  • disambiguate same-user accounts to integrate identity information (even establishing previously non-existent key mappings),
  • overcome performance issues of underlying sources,
  • "push" information for provisioning,
  • use an ESB for messaging/synchronization,
  • model new views of data,

and the list goes on-- you face one or more of these things with each new deployment.

I prefer the idea of having a central access point for identity and just creating a new view based on my new requirements, rather than configuring applications over and over and duplicating my efforts.  I do the work once and my applications have one place to look for the information (for authz and authn).  It saves A LOT of time. Even if you do have an application that supports directory services very well, such as CA's SiteMinder (which offers a lot of functionality for connecting multiple sources), you still configure all of this and it is only consumed by SiteMinder.  I've lost all the effort I put in when another deployment needs to consume the same or similar identity profiles. There really is a lot of duplication of work here across different applications. 

Which is quicker?  Configuring each application's virtual-directory-like component, or just creating a new view in a virtual directory to be consumed by a new application?

I would be interested to hear what others think....

More Complex Identity Integration

I have had some questions recently about what I mean by comments about "more complex integration".  A few people have asked me through my blog, and recently at Catalyst in San Diego, so perhaps I should clarify.  What IS my point of reference for "more" complex? 

Quoted from http://identityinfrastructure.blogspot.com/2008/02/oracle-virtual-directory-webinar.html:

Identity Data Delivery: Oracle Virtual Directory Webinar

... offer solutions to more complex integration problems that you can face that require more feature sets...

My point of reference was in relation to some virtual directory products which are basically LDAP proxies.  The issues you will face when planning an identity deployment are not always apparent.  So here are a few examples of what I would consider outside the scope of most LDAP proxy services.

1) Integration between profiles where a common key is not present

When a common key is not present in all data sources, correlation and complex matching rules must be used to disambiguate users across systems. For performance, you need to do this processing offline (many virtual directories would try to do this dynamically, in real time for each query, if they can do it at all). 
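Here is a rough Python sketch of that offline correlation step, with a deliberately crude matching rule (normalized name plus matching site, both invented for illustration). The resulting key map would be persisted and reused at query time, rather than recomputed for each query.

    # Build a key map offline between two sources that share no common key.
    def name_key(s):
        # crude matching rule: strip punctuation, lowercase, sort the tokens
        tokens = s.replace(",", " ").lower().split()
        return tuple(sorted(tokens))

    src_a = [{"id": "A1", "name": "Doe, Jane", "site": "NYC"}]
    src_b = [{"id": "B7", "name": "Jane Doe",  "site": "NYC"}]

    key_map = []   # persisted once (e.g. to a table) and reused at query time
    for a in src_a:
        for b in src_b:
            if name_key(a["name"]) == name_key(b["name"]) and a["site"] == b["site"]:
                key_map.append({"src_a_id": a["id"], "src_b_id": b["id"]})

    print(key_map)   # [{'src_a_id': 'A1', 'src_b_id': 'B7'}]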

2) Projects that could benefit from "push" not only "pull" technology.

Sometimes, to achieve identity integration, you need to provide not only "pull" (search/query) but also "push" (synchronization) technology (needed for things such as provisioning). Being able to leverage ESB technology, for example, has huge benefits for adapting to future needs. 
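A toy Python contrast of the two models, with hypothetical names: "pull" answers a query on demand, while "push" notifies subscribers (say, a provisioning connector) the moment an identity changes.

    subscribers = []

    def subscribe(callback):          # push: register a consumer
        subscribers.append(callback)

    def publish_change(event):        # push: fan the change out to consumers
        for callback in subscribers:
            callback(event)

    def query_identity(uid, store):   # pull: on-demand search
        return store.get(uid)

    store = {"jdoe": {"uid": "jdoe", "title": "Analyst"}}
    subscribe(lambda e: print("provisioning connector saw:", e))

    store["jdoe"]["title"] = "Manager"
    publish_change({"uid": "jdoe", "title": "Manager"})   # push
    print(query_identity("jdoe", store))                  # pull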

3) Deployments that require a different tree structure than what exists in current identity sources.

For some projects, the integration of the sources requires a tree structure that does not match any of the existing DITs.  You need the ability to design a new DIT based on existing information, but in a different structure. 
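A small sketch of what that restructuring can look like: each entry's new placement is computed from an attribute (department here), so the virtual tree can differ from every backend DIT. The attribute and suffix names are invented for the example.

    # Re-parent existing entries into a new virtual DIT without touching sources.
    def remap_dn(entry, new_suffix="ou=apps,dc=example,dc=com"):
        return f"uid={entry['uid']},ou={entry['department']},{new_suffix}"

    existing = [
        {"dn": "cn=Jane Doe,ou=staff,dc=legacy,dc=com",
         "uid": "jdoe", "department": "Finance"},
    ]
    for entry in existing:
        print(entry["dn"], "->", remap_dn(entry))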

4) Deployments that require, or could benefit from, information that lives in legacy applications accessible only via web services, and/or in one or more databases where the data you need is spread across multiple linked tables (complex relationships).

This requires preserving the context of the objects and the ability to build a modeled view of those objects based on the requirements of the initiative.  This is again outside the scope of the LDAP proxy service offered by most virtual directory vendors.

Also, think long term.  Often we do not care enough about other feature sets because we don't see the need today.  Integration issues tend to get more complex, not less complex, over time. Just because you are not facing these types of issues now doesn't mean that you won't in the future. More often than not, we avoid using certain identity information only because of the complexity of doing so.  There are certainly more options for identity integration than the LDAP proxy capabilities offered by some virtual directories.

Wednesday, July 9, 2008

The "Virtual Storm"

Too bad I didn't see this before my last post!  This is great, and actually humorous, especially if you have been following any or all of the virtual / meta / "my stuff is better than your stuff" discussions since March '08.  Jackson started it! (Can you hear the child-like finger-pointing in my voice?)

But seriously, I think this interest and effort by people trying to make sense of the tools and methods available and solve real problems of identity in the enterprise should tell us something.  The problems still exist!  I love debate; it means people are thinking about it. Progress is at hand :)  

SO - go read this post!  :) 

Cache, Enterprise Service Bus, Virtual Directories

I was trying to catch up on some of the discussions from the past month or so. Check this posting from Mark Wilcox.  Mark does a good job of being a peace-maker of sorts, trying to organize thoughts from different bloggers such as Dave Kearns and Clayton Donley.  There is definitely confusion around what each author is talking about.

It is my position that you need not only what Oracle and others would define as a virtual directory, but that the service should include the other tools necessary for a complete solution: cache, ESB (enterprise service bus) / messaging, LDAP proxy, data modeling, key mapping/correlation, web services, etc.  This is what identity virtualization is all about. You have different problems; you need different solutions (see what Jeff Bohren says). Virtualization allows you to take your existing data stores and use them the way you need them. You should have LDAP, SQL, and web service capabilities - not be limited to a single protocol.  You need push and pull of data (dynamic access and a message bus) and performance guarantees (cache).

BTW, it is funny to watch some of these guys fight against cache.  WHY?  We use cache everywhere for performance reasons.  No one argues that CPUs aren't fast enough just because they have cache!?! Or that the internet needs to be "fixed" because your web browser has a cache. Cache is everywhere; there are real applications for it, people use it, people need it.  So why the anti-cache rhetoric when it comes up in terms of virtual directories?  It is a mystery.... almost all virtual directories have at least one form of caching option... 

It has always been frustrating to me to see large companies bully customers into accepting their view of the world.  What I mean in this example is that some large vendors will tell you that if their virtual directory isn't fast enough for you, then you have a problem with your sources - it is not their problem. They will be glad to sell you more of their "stuff" to fix "your" problems.  Well, I prefer to keep my lunch money rather than give it to bullies.  :)   Think about it....

Wednesday, July 2, 2008

SaaS IdM Management - Flynn

Matt Flynn has an interesting post about what he calls SaaS-ish identity management.  The idea is to outsource your IdM management; the discussion is primarily about the objections to this idea within an organization. I can see how this could be useful, especially to smaller organizations that cannot afford project teams, expensive IdM management packages, and the ongoing expense of maintaining these systems.  This is certainly down the road, but worth taking note of - this could certainly become reality. 

There are certainly segments of the IdM market ready for commoditization, and this could be one more way of doing just that. 


Monday, June 30, 2008

New VDS Releases

There have been a couple of announcements in the virtual directory world.  I would be remiss if I did not make mention of them.

Radiant Logic announced (Jun 23) the release of VDS 5.0. Here are some links from the announcement: Press Release, Government Computer News, DM Review, Campus Technology.

The new VDS offers simplified use of virtual directory technology, turning complicated LDAP operations into simple point-and-click tools. VDS 5.0 has built-in wizards and templates to make the most common deployments even quicker and easier.  The new admin console boasts scenario-driven wizards, so that users can easily deploy various identity and access management tasks.  Scenarios include web access management, Active Directory object and attribute mapping to SunOne (or another LDAP v3 directory server), and Active Directory forest aggregation.  Support has also been added for role-based delegated administration.

They also added a very cool new Web-based remote admin console.  If you get a chance, check it out.  It was built on Adobe's new Flex technology and has all the features of the server-based console.

As LDAP ages, it is important to see vendors automating more of the LDAP functions. The need for hierarchical structures still persists, and LDAP is a perfect fit; applications consuming information from LDAP-based repositories are not going away anytime soon. I have already heard this new version of RadiantOne VDS referred to as the "VDS for the rest of us". 

Optimal IdM also announced the release of their new virtual directory offering today (June 30th), the Virtual Identity Server for Enterprise Group Management.  This is a specialized release of their VD, based on .NET, to help with the notoriously cumbersome task of AD group management. 

Their close partnership with Microsoft shows in their product.  Not only is it .NET based, they address AD issues with sharp precision.  Certainly Microsoft-centered shops should take notice. I also read a good posting about this release on Jeff Bohren's blog. Jeff describes the new tools emerging from virtual directory vendors as specialized "arrows" designed for specific problems.  I could not agree more.  You need a "quiver" of tools at your disposal to meet the unique and changing requirements of your identity environment.  Having the flexible identity infrastructure that virtual directories provide is, in my opinion, essential.