CoSign: Collaborative Single Sign-On  
 

cosign-discuss at umich.edu
general discussion of cosign development and deployment
 


Re: Cosign in production at UoA and thanks





--On Tuesday, June 14, 2005 10:31 AM -0400 Wesley Craig <wes@xxxxxxxxx> wrote:

On 14 Jun 2005, at 10:06, David Alexander wrote:
We implemented Cosign behind an F5 BigIP load balancer. Our short-term solution uses NFS as a shared file store between blades. Although we have a high-availability architecture for NFS, we are concerned about the response time of NFS under load. In the future, we would like to replace the file reads and writes with Oracle database calls.

So you're not using replication at all? Can you describe how you've deployed NFS with high availability?


We are not using Cosign's replication. We configured each Cosign blade to store cookies on the NFS mount point.
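The pattern is simple: each login cookie is a small file named after the cookie itself, so if every blade reads and writes those files under the same NFS-mounted directory, a cookie registered on one blade can be checked on any other without Cosign's replication. A rough sketch of that idea (the directory and helper names below are illustrative, not Cosign's actual code):

    import os
    import time

    # Illustrative only: every blade mounts the same NFS export at this
    # path; cosignd keeps one small file per login cookie, and this
    # sketch mimics that layout with made-up helper names.
    COOKIE_DIR = "/nfs/cosign/cookies"   # hypothetical mount point

    def register_cookie(cookie_name, principal):
        """Written by whichever blade handles the login."""
        path = os.path.join(COOKIE_DIR, cookie_name)
        with open(path, "w") as f:
            f.write("%s %d\n" % (principal, int(time.time())))

    def check_cookie(cookie_name, timeout=7200):
        """Readable from any other blade via the shared mount."""
        path = os.path.join(COOKIE_DIR, cookie_name)
        try:
            with open(path) as f:
                principal, issued = f.read().split()
        except (IOError, OSError):
            return None                     # no such cookie: not logged in
        if time.time() - int(issued) > timeout:
            return None                     # cookie has expired
        return principal                    # valid on every blade

With the directory on NFS, a cookie written by one blade is immediately visible to the others, which is all the load-balanced setup needs.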


NFS is hosted on an Alpha cluster with a Memory Channel interconnect, and the NFS service can fail over between cluster members. We did a lot of performance testing of this NFS system because we already use it in production for other applications. Cosign's disk use profile - lots of small reads and writes - does not cause contention on the NFS cluster. The NFS solution is a temporary kludge until we can get Cosign working with Oracle.
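The planned Oracle replacement would swap those per-cookie file reads and writes for table lookups. A rough sketch of what that could look like, assuming a hypothetical cosign_cookies table and the cx_Oracle driver (illustrative only, not existing Cosign code):

    import time
    import cx_Oracle   # assumed driver; any DB-API module looks much the same

    # Hypothetical schema, not part of Cosign:
    #   CREATE TABLE cosign_cookies (
    #       cookie_name VARCHAR2(256) PRIMARY KEY,
    #       principal   VARCHAR2(64),
    #       issued      NUMBER
    #   )
    conn = cx_Oracle.connect("cosign/secret@dbhost")   # placeholder credentials

    def register_cookie(cookie_name, principal):
        # One INSERT replaces the small file write on the NFS mount.
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO cosign_cookies (cookie_name, principal, issued) "
            "VALUES (:1, :2, :3)",
            (cookie_name, principal, int(time.time())))
        conn.commit()
        cur.close()

    def check_cookie(cookie_name, timeout=7200):
        # One SELECT replaces the small file read.
        cur = conn.cursor()
        cur.execute(
            "SELECT principal, issued FROM cosign_cookies "
            "WHERE cookie_name = :1",
            (cookie_name,))
        row = cur.fetchone()
        cur.close()
        if row is None or time.time() - row[1] > timeout:
            return None
        return row[0]

Either store sits behind the same two operations, so the file-versus-database choice is mostly a question of response time under load rather than anything in Cosign's protocol.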




 