Storage Plug-ins!!! Plug-in preview for the new storage frameworks with SolidFire
Jan 15, 2015
Mike Tutkowski • Lead Open Source Developer @SolidFire • Dedicated to CloudStack Development
McClain Buggle • Strategic Alliance Manager @SolidFire
Who are these guys?
The CloudStack Opportunity
• Cloud is happening NOW
• Pain points are REAL
• LACK of viable alternatives
• Opportunity is MASSIVE
• AWS is not standing still
Why the Urgency Around CloudStack?
[Chart: Application Value / Margin ($ to $$$, Low to High) plotted against Performance Sensitivity (IOPS, Low to High). At the low end: Backup / Archive and Dev / Test applications; at the high end: Performance Sensitive Apps such as CRM / ERP / Database, Messaging / Productivity, and Desktop.]
What is the opportunity?
We’ve seen this movie before...
x86 Virtualization – The Test/Dev Era
Is this a re-run?
Cloud Computing – The Test/Dev Era
Why yes, it is!
The Test/Dev Era – x86 Virtualization vs. Cloud Computing
x86 Virtualization
This movie ended well…
x86 Virtualization – From Test/Dev to Production
Cloud Computing
This ending is still being written
The Production Era Opportunity: x86 Virtualization vs. Cloud
How do we influence the outcome?
[Chart repeated from the "What is the opportunity?" slide: Application Value / Margin vs. Performance Sensitivity / IOPS.]
Key Cloud Infrastructure Innovations
• Availability
• Performance
• Quality-of-Service
• Scalability
• Automation
• Storage is a major pain point in most early cloud deployments
  • Unpredictable performance
  • Not designed for multi-tenancy
• Storage is a key underpinning of successful application deployments
• Today = Backup/Archive, Dev/Test
• Tomorrow = Mission- & Business-Critical Applications
What does this have to do with CloudStack?
Where we are today with a storage plug-in
Primary Storage in CloudStack
CloudStack was not designed for dynamic provisioning, and its framework does not leverage vendor-unique storage features.
For SolidFire we are interested in features that allow users to select minimum, maximum, and burst IOPS for a given volume.
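To illustrate the kind of per-volume QoS data such a plug-in would carry, here is a minimal sketch. The `IopsSettings` class and its validation rule are hypothetical, for illustration only; they are not part of CloudStack or the SolidFire plug-in.

```java
// Hypothetical value class illustrating per-volume QoS settings
// (minimum, maximum, and burst IOPS) such as a SolidFire-style
// plug-in might associate with a volume.
public class IopsSettings {
    private final long minIops;
    private final long maxIops;
    private final long burstIops;

    public IopsSettings(long minIops, long maxIops, long burstIops) {
        // Enforce the natural ordering: 0 < min <= max <= burst.
        if (minIops <= 0 || minIops > maxIops || maxIops > burstIops) {
            throw new IllegalArgumentException("require 0 < min <= max <= burst IOPS");
        }

        this.minIops = minIops;
        this.maxIops = maxIops;
        this.burstIops = burstIops;
    }

    public long getMinIops() { return minIops; }
    public long getMaxIops() { return maxIops; }
    public long getBurstIops() { return burstIops; }
}
```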
Use Cases for a CloudStack Plug-In
Ability to defer the creation of a volume until the moment the end user elects to execute a Compute or Disk Offering.
Still have CS Admin configure the Primary Storage, but now it is based on a plug-in instead of on a pre-existing storage volume.
No requirement on part of the CSP to write orchestration logic.
My Specific Needs from the Plug-in
A CloudStack storage plug-in is divided into three components:
Provider: Logic related to the plug-in in general (ex: name of plug-in).
Life Cycle: Logic related to life cycle (ex: creation) of a given storage system (ex: a single SolidFire SAN).
Driver: Logic related to creating and deleting volumes on the storage system.
Must add a dependency to the client/pom.xml file:

<dependency>
    <groupId>org.apache.cloudstack</groupId>
    <artifactId>cloud-plugin-storage-volume-solidfire</artifactId>
    <version>${project.version}</version>
</dependency>
So…how do you actually make a plug-in?
Must implement the PrimaryDataStoreProvider interface.
Provides CloudStack with the plug-in's name as well as the Life Cycle and Driver objects the storage system uses.
Must be listed in the applicationContext.xml.in file (Spring Framework related).
A single instance of this class is created for CloudStack.
Provider – About
public interface PrimaryDataStoreProvider extends DataStoreProvider {
}

public interface DataStoreProvider {
    public static enum DataStoreProviderType {
        PRIMARY, IMAGE
    }

    public DataStoreLifeCycle getDataStoreLifeCycle();
    public DataStoreDriver getDataStoreDriver();
    public HypervisorHostListener getHostListener();
    public String getName();
    public boolean configure(Map<String, Object> params);
    public Set<DataStoreProviderType> getTypes();
}
Provider – Interface
public class SolidfirePrimaryDataStoreProvider implements PrimaryDataStoreProvider {
    private final String providerName = "SolidFire";

    protected PrimaryDataStoreDriver driver;
    protected HypervisorHostListener listener;
    protected DataStoreLifeCycle lifecycle;

    @Override
    public String getName() {
        return providerName;
    }

    @Override
    public DataStoreLifeCycle getDataStoreLifeCycle() {
        return lifecycle;
    }

    @Override
    public boolean configure(Map<String, Object> params) {
        lifecycle = ComponentContext.inject(SolidFirePrimaryDataStoreLifeCycle.class);
        driver = ComponentContext.inject(SolidfirePrimaryDataStoreDriver.class);
        listener = ComponentContext.inject(DefaultHostListener.class);

        return true;
    }
}
Provider – Implementation
Notes: client/tomcatconf/applicationContext.xml.in
Each provider adds a single line. The "id" is only used by the Spring Framework (not by the CS Management Server); recommend just providing a descriptive name.

Example:
<bean id="ClassicalPrimaryDataStoreProvider"
      class="org.apache.cloudstack.storage.datastore.provider.CloudStackPrimaryDataStoreProviderImpl" />
<bean id="solidFireDataStoreProvider"
      class="org.apache.cloudstack.storage.datastore.provider.SolidfirePrimaryDataStoreProvider" />
Provider – Configuration
Must implement the PrimaryDataStoreLifeCycle interface.
Handles the creation, deletion, etc. of a storage system (ex: SAN) in CloudStack.
The initialize method of the Life Cycle object adds a row into the cloud.storage_pool table to represent a newly added storage system.
Life Cycle – About
public interface PrimaryDataStoreLifeCycle extends DataStoreLifeCycle {
}

public interface DataStoreLifeCycle {
    public DataStore initialize(Map<String, Object> dsInfos);
    public boolean attachCluster(DataStore store, ClusterScope scope);
    public boolean attachHost(DataStore store, HostScope scope, StoragePoolInfo existingInfo);
    boolean attachZone(DataStore dataStore, ZoneScope scope);
    public boolean dettach();
    public boolean unmanaged();
    public boolean maintain(DataStore store);
    public boolean cancelMaintain(DataStore store);
    public boolean deleteDataStore(DataStore store);
}
Life Cycle – Interface
@Override
public DataStore initialize(Map<String, Object> dsInfos) {
    String url = (String)dsInfos.get("url");
    String uuid = getUuid(); // maybe base this off of something already unique
    Long zoneId = (Long)dsInfos.get("zoneId");
    String storagePoolName = (String)dsInfos.get("name");
    String providerName = (String)dsInfos.get("providerName");

    PrimaryDataStoreParameters parameters = new PrimaryDataStoreParameters();

    parameters.setHost("10.10.7.1"); // really get from URL
    parameters.setPort(3260); // really get from URL
    parameters.setPath(url);
    parameters.setType(StoragePoolType.IscsiLUN);
    parameters.setUuid(uuid);
    parameters.setZoneId(zoneId);
    parameters.setName(storagePoolName);
    parameters.setProviderName(providerName);

    return dataStoreHelper.createPrimaryDataStore(parameters);
}
Life Cycle – Implementation
Must implement the PrimaryDataStoreDriver interface.
Your opportunity to create or delete a volume and to add a row to or delete a row from the cloud.volumes table.
A single instance of this class is responsible for creating and deleting volumes on all storage systems of the same type.
Driver – About
public interface PrimaryDataStoreDriver extends DataStoreDriver {
    public void takeSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback<CreateCmdResult> callback);
    public void revertSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback<CommandResult> callback);
}

public interface DataStoreDriver {
    public String grantAccess(DataObject data, EndPoint ep);
    public boolean revokeAccess(DataObject data, EndPoint ep);
    public Set<DataObject> listObjects(DataStore store);
    public void createAsync(DataObject data, AsyncCompletionCallback<CreateCmdResult> callback);
    public void deleteAsync(DataObject data, AsyncCompletionCallback<CommandResult> callback);
    public void copyAsync(DataObject srcData, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback);
    public boolean canCopy(DataObject srcData, DataObject destData);
    public void resize(DataObject data, AsyncCompletionCallback<CreateCmdResult> callback);
}
Driver – Interface
@Override
public void createAsync(DataObject data, AsyncCompletionCallback<CreateCmdResult> callback) {
    String iqn = null;
    String errMsg = null;

    try {
        VolumeInfo volumeInfo = (VolumeInfo)data;

        iqn = createSolidFireVolume(volumeInfo);

        VolumeVO volume = new VolumeVO(volumeInfo);

        volume.setPath(iqn);

        volumeDao.persist(volume);
    } catch (Exception e) {
        s_logger.debug("Failed to create volume (Exception)", e);

        errMsg = e.toString();
    }

    CreateCmdResult result = new CreateCmdResult(iqn, errMsg == null ? data.getSize() : null);

    result.setResult(errMsg);

    callback.complete(result);
}
Driver – Implementation
Ask the CS MS to provide a list of all storage providers
http://127.0.0.1:8080/client/api?command=listStorageProviders&type=primary&response=json
Ask the CS MS to add a Primary Storage (a row in the cloud.storage_pool table) based on your plug-in (ex: make CloudStack aware of a SolidFire SAN)
http://127.0.0.1:8080/client/api?command=createStoragePool&scope=zone&zoneId=a7af53b4-ec15-4afc-a9ee-8cba82b43474&name=SolidFire_831569365&url=MVIP%3A192.168.138.180%3BSVIP%3A10.10.7.1&provider=SolidFire&response=json
Ask the CS MS to provide a list of all Primary Storages
http://127.0.0.1:8080/client/api?command=listStoragePools&response=json
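As a rough sketch of how a client might assemble such calls, the helper below builds a CloudStack-style API URL from a command name and its query parameters. The `ApiUrlBuilder` class is hypothetical, not part of CloudStack; real deployments also require apiKey/signature parameters (or a session), which are omitted here.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper that assembles an unauthenticated CloudStack API URL
// from a command name and query parameters. Authentication is omitted.
public class ApiUrlBuilder {
    public static String buildApiUrl(String base, String command, Map<String, String> params)
            throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder(base).append("?command=").append(command);

        for (Map.Entry<String, String> entry : params.entrySet()) {
            sb.append('&').append(entry.getKey()).append('=')
              .append(URLEncoder.encode(entry.getValue(), "UTF-8"));
        }

        return sb.toString();
    }
}
```

For example, passing `"listStorageProviders"` with parameters `type=primary` and `response=json` reproduces the first URL above.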
API Calls
Need support for root disks. At the moment, the framework is mainly focused on data disks.
Need code to create datastores on ESX hosts and shared mount points on KVM hosts (we already have logic to create storage repositories on XenServer hosts).
Speaking in terms of XenServer (but true for other hypervisors), when a volume is attached or detached, we need logic in place that handles zone-wide storage.
No GUI support yet for adding a provider; it must be done via the API.
What’s left to do?
Trivia Question
The framework treats the default storage behavior as a plug-in
Why?