Ehcache v3 Ticket Registry
Ehcache 3.x integration is enabled by including the following dependency in the WAR overlay:
<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-ehcache3-ticket-registry</artifactId>
    <version>${cas.version}</version>
</dependency>
implementation "org.apereo.cas:cas-server-support-ehcache3-ticket-registry:${project.'cas.version'}"
dependencyManagement {
    imports {
        mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
    }
}

dependencies {
    implementation "org.apereo.cas:cas-server-support-ehcache3-ticket-registry"
}
This registry stores tickets using the Ehcache 3.x caching library and an optional Terracotta cluster.
Actuator Endpoints
The following endpoints are provided:
In-memory store with disk persistence
Ehcache 3.x does not support distributed caching without Terracotta, so a deployment that does not point at a Terracotta server or cluster is limited to a single CAS server at a time. The location and size of the disk caches can be configured using the root-directory and per-cache-size-on-disk properties. If the persist-on-disk property is set to true, the caches will survive a restart.
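As a rough sketch, and assuming these settings live under the cas.ticket.registry.ehcache3 prefix (verify the exact names against the configuration catalog below), a disk-persistent single-server setup might look like this:

# Hypothetical values; the cas.ticket.registry.ehcache3 prefix is an assumption.
cas.ticket.registry.ehcache3.root-directory=/var/cache/cas/ehcache3
cas.ticket.registry.ehcache3.per-cache-size-on-disk=20MB
cas.ticket.registry.ehcache3.persist-on-disk=true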
Terracotta Clustering
Pointing this Ehcache module at a Terracotta server allows multiple CAS servers to share tickets. CAS uses autocreate
to create the Terracotta cluster configuration. An easy way to run a Terracotta server is to use the Docker container:
docker run --rm --name tc-server -p 9410:9410 -d \
--env OFFHEAP_RESOURCE1_NAME=main \
--env OFFHEAP_RESOURCE2_NAME=extra \
--env OFFHEAP_RESOURCE1_SIZE=256 \
--env OFFHEAP_RESOURCE2_SIZE=16 \
terracotta/terracotta-server-oss:5.6.4
Running a Terracotta cluster on Kubernetes can be done easily using the Terracotta Helm chart.
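Once a Terracotta server is reachable, CAS must be told where the cluster lives. A minimal sketch, assuming a terracotta-cluster-uri property under the same ehcache3 prefix (check the configuration catalog below for the exact name):

# Points CAS at the Terracotta server started above; the URI form is an assumption.
cas.ticket.registry.ehcache3.terracotta-cluster-uri=terracotta://localhost:9410/cas-application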
Configuration
The following settings and properties are available from the CAS configuration catalog:
Configuration Metadata
The collection of configuration properties listed in this section are automatically generated from the CAS source and components that contain the actual field definitions, types, descriptions, modules, etc. This metadata may not always be 100% accurate, or could be lacking details and sufficient explanations.
Be Selective
This section is meant as a guide only. Do NOT copy/paste the entire collection of settings into your CAS configuration; rather pick only the properties that you need. Do NOT enable settings unless you are certain of their purpose and do NOT copy settings into your configuration only to keep them as reference. All these ideas lead to upgrade headaches, maintenance nightmares and premature aging.
YAGNI
Note that for nearly ALL use cases, declaring and configuring properties listed here is sufficient. You should NOT have to explicitly massage a CAS XML/Java/etc configuration file to design an authentication handler, create attribute release policies, etc. CAS at runtime will auto-configure all required changes for you. If you are unsure about the meaning of a given CAS setting, do NOT turn it on without hesitation. Review the codebase or better yet, ask questions to clarify the intended behavior.
Naming Convention
Property names can be specified in very relaxed terms. For instance cas.someProperty, cas.some-property and cas.some_property are all valid names. While all forms are accepted by CAS, there are certain components (in CAS and other frameworks used) whose activation at runtime is conditional on a property value, where this property is required to have been specified in CAS configuration using kebab case. This is both true for properties that are owned by CAS as well as those that might be presented to the system via an external library or framework such as Spring Boot, etc.

When possible, properties should be stored in lower-case kebab format, such as cas.property-name=value. The only possible exception to this rule is when naming actuator endpoints; the name of the actuator endpoint (i.e. ssoSessions) MUST remain in camelCase mode.

Settings and properties that are controlled by the CAS platform directly always begin with the prefix cas. All other settings are controlled and provided to CAS via other underlying frameworks and may have their own schemas and syntax. BE CAREFUL with the distinction. Unrecognized properties are rejected by CAS and/or frameworks upon which CAS depends. This means if you somehow misspell a property definition or fail to adhere to the dot-notation syntax and such, your setting is entirely refused by CAS and likely the feature it controls will never be activated in the way you intend.
Validation
Configuration properties are automatically validated on CAS startup to report issues with configuration binding, especially if defined CAS settings cannot be recognized or validated by the configuration schema. The validation process is on by default and can be skipped on startup using a special system property SKIP_CONFIG_VALIDATION that should be set to true. Additional validation processes are also handled via Configuration Metadata and property migrations applied automatically on startup by Spring Boot and family.
Indexed Settings
CAS settings able to accept multiple values are typically documented with an index, such as cas.some.setting[0]=value. The index [0] is meant to be incremented by the adopter to allow for distinct multiple configuration blocks.
Eviction Policy
Ehcache can be configured as “eternal”, in which case CAS’s regular cleaning process will remove expired tickets. If the eternal property is set to false, storage timeouts will be set based on the metadata for the individual caches.
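A minimal sketch, assuming the eternal flag sits under the same ehcache3 prefix as the other registry settings:

# eternal=true: entries stay until the CAS ticket cleaner removes expired tickets.
# eternal=false: per-cache storage timeouts are derived from ticket expiration metadata.
cas.ticket.registry.ehcache3.eternal=true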
Ehcache v2 Ticket Registry
Due to the relatively unsupported status of the Ehcache 2.x code base, this module is deprecated and will likely be removed in a future CAS release. Unlike the Ehcache 3.x library, it can replicate directly between CAS servers without needing an external cache cluster (e.g. Terracotta in Ehcache 3.x).
This feature is deprecated and is scheduled to be removed in the future. If you can, consider using the Ehcache v3 ticket registry functionality in CAS to handle this integration.
Ehcache integration is enabled by including the following dependency in the WAR overlay:
<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-ehcache-ticket-registry</artifactId>
    <version>${cas.version}</version>
</dependency>
implementation "org.apereo.cas:cas-server-support-ehcache-ticket-registry:${project.'cas.version'}"
dependencyManagement {
    imports {
        mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
    }
}

dependencies {
    implementation "org.apereo.cas:cas-server-support-ehcache-ticket-registry"
}
This registry stores tickets using the Ehcache 2.x library.
Distributed Cache
Distributed caches are recommended for HA architectures since they offer fault tolerance in the ticket storage subsystem. A single cache instance is created to house all types of tickets, and is synchronously replicated across the cluster of nodes that are defined in the configuration.
RMI Replication
Ehcache supports RMI replication for distributed caches composed of two or more nodes. To learn more about RMI replication with Ehcache, see this resource.
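The registry is typically pointed at an external Ehcache XML file, such as the ehcache-replicated.xml shown later in this section. A sketch, assuming a config-location style property under the cas.ticket.registry.ehcache prefix (verify the exact names against the configuration catalog below):

# Property names are assumptions; the cache manager name must match the Ehcache XML.
cas.ticket.registry.ehcache.config-location=classpath:ehcache-replicated.xml
cas.ticket.registry.ehcache.cache-manager-name=ehCacheTicketRegistryCache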
Configuration
The following settings and properties are available from the CAS configuration catalog:
The Ehcache configuration for ehcache-replicated.xml mentioned in the config follows. Note that ${ehcache.otherServer} would be replaced by a system property: -Dehcache.otherServer=cas2.
<ehcache name="ehCacheTicketRegistryCache"
         updateCheck="false"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd">

    <diskStore path="java.io.tmpdir/cas"/>

    <!-- Automatic Peer Discovery
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1, multicastGroupPort=4446, timeToLive=32"
        propertySeparator="," />
    -->

    <!-- Manual Peer Discovery -->
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,rmiUrls=//${ehcache.otherServer}:41001/proxyGrantingTicketsCache| \
            //${ehcache.otherServer}:41001/ticketGrantingTicketsCache|//${ehcache.otherServer}:41001/proxyTicketsCache| \
            //${ehcache.otherServer}:41001/oauthCodesCache|//${ehcache.otherServer}:41001/samlArtifactsCache| \
            //${ehcache.otherServer}:41001/oauthDeviceUserCodesCache|//${ehcache.otherServer}:41001/samlAttributeQueryCache| \
            //${ehcache.otherServer}:41001/oauthAccessTokensCache|//${ehcache.otherServer}:41001/serviceTicketsCache| \
            //${ehcache.otherServer}:41001/oauthRefreshTokensCache|//${ehcache.otherServer}:41001/transientSessionTicketsCache| \
            //${ehcache.otherServer}:41001/oauthDeviceTokensCache" />

    <cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
        properties="port=41001,remoteObjectPort=41002" />

</ehcache>
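With that file in place, each node supplies its peer’s hostname at startup via the system property mentioned above. For example, assuming an executable CAS WAR, the first node might be started as:

# cas2 is the hostname of the peer node; adjust per node.
java -Dehcache.otherServer=cas2 -jar cas.war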
Eviction Policy
Ehcache manages the internal eviction policy of cached objects via the idle and alive settings. These settings control the general policy of the cache that is used to store various ticket types. In general, you need to ensure the cache is alive long enough to support the individual expiration policy of tickets, and let CAS clean the tickets as part of its own cleaner.
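A minimal sketch of such a policy, assuming time-to-live/time-to-idle style settings under the cas.ticket.registry.ehcache prefix; the values (in seconds) are hypothetical and must outlive your longest ticket expiration policy:

# Property names are assumptions; verify against the configuration catalog above.
cas.ticket.registry.ehcache.eternal=false
cas.ticket.registry.ehcache.cache-time-to-live=28800
cas.ticket.registry.ehcache.cache-time-to-idle=0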
Troubleshooting Guidelines
- You will need to ensure that network communication across CAS nodes is allowed and no firewall or other component is blocking traffic.
- If you are running this on a server with active firewalls, you will probably need to specify a fixed remoteObjectPort within the cacheManagerPeerListenerFactory.
- Depending on environment settings and the version of Ehcache used, you may also have to adjust the shared setting.
- Ensure that each cache manager specifies a name that matches the Ehcache configuration itself.
- You may also need to adjust your expiration policy to allow for a larger time span, especially for service tickets, depending on network traffic and communication delay across CAS nodes, particularly in the event that a node is trying to join the cluster.