Is a file interface “The API” of the future?
by Michael Hay on Apr 4, 2010
Recently BlueArc released and open sourced an API for controlling and monitoring migration and for notifying subscribers of added, changed, or deleted file system objects. Essentially, their API mirrors the Linux/UNIX "/proc" model: a virtual file system access point that can be used to monitor or control a running system. In related news, HCP V3 also contains a per-namespace "/proc" that provides key statistics for each namespace/tenant combination. This includes XML-formatted information such as the namespace properties:
<namespace name="support" nameIDNA="support" versioningEnabled="true"
           searchEnabled="true" retentionMode="enterprise"
           defaultShredValue="false" defaultIndexValue="true"
           defaultRetentionValue="0" hashScheme="SHA-256" dpl="2">
  <description>
    <![CDATA[Technical Support department ]]>
  </description>
  ...
or statistics from the namespace:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/static/xsl/proc-statistics.xsl"?>
<statistics xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:noNamespaceSchemaLocation="/static/xsd/proc-statistics.xsd"
            namespaceName="finance" totalCapacity="10737418240"
            usedCapacity="932454739" softQuota="85" objectCount="43230"
            shredObjectCount="0" shredObjectBytes="0"
            customMetadataObjectCount="6754" customMetadataObjectBytes="894893"/>
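As a sketch of how a client might consume the statistics document above, the attributes can be pulled out with Python's standard xml.etree.ElementTree. The element and attribute names come straight from the example; the xsi attributes are omitted here to keep the snippet self-contained:

```python
import xml.etree.ElementTree as ET

# The /proc/statistics payload shown above (schema attributes trimmed)
stats_xml = """<?xml version="1.0" encoding="UTF-8"?>
<statistics namespaceName="finance" totalCapacity="10737418240"
            usedCapacity="932454739" softQuota="85" objectCount="43230"
            shredObjectCount="0" shredObjectBytes="0"
            customMetadataObjectCount="6754" customMetadataObjectBytes="894893"/>"""

root = ET.fromstring(stats_xml)

# Attribute values arrive as strings, so convert before doing arithmetic
total = int(root.get("totalCapacity"))
used = int(root.get("usedCapacity"))
pct_used = 100.0 * used / total

print("%s: %.1f%% of %d bytes used, %s objects"
      % (root.get("namespaceName"), pct_used, total, root.get("objectCount")))
```

The nice property of a fixed-schema /proc document is that monitoring tools reduce to a fetch plus a dozen lines of parsing like this.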
Note that the Hitachi Content Platform (HCP) documentation shows you how to get access to the above information through Python and cURL. (Just for grins, there is a quick recipe at the end of this post, literally below the line.) However, a detailed cookbook for pulling information out of HCP is really not what this post is about. Instead, it takes the ideas from Linux, HNAS, and HCP in a different direction: what if the management API for storage infrastructure were a file system or pseudo-file system interface? I'll say more about this in another post, but first I want to know whether anyone out there finds command and control of storage through a file system interface interesting.
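To make the /proc analogy concrete, here is what consuming a kernel statistic through Linux's own virtual file system looks like: no special client library, just ordinary file I/O. This assumes a Linux host with /proc mounted:

```python
# Read the 1-, 5-, and 15-minute load averages from the /proc virtual
# file system; the "file" is synthesized by the kernel on each read.
with open("/proc/loadavg") as f:
    fields = f.read().split()

one_min, five_min, fifteen_min = (float(x) for x in fields[:3])
print("load averages: %.2f %.2f %.2f" % (one_min, five_min, fifteen_min))
```

Every language and every shell already knows how to open and read a file, which is exactly the appeal of a file system interface for storage management.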
The basic Python code, using pycurl, to access a namespace and get some statistics from an HCP V3 system is as follows (note that the pattern is very similar for getting other information about a namespace):
import pycurl

curl = pycurl.Curl()
# Authenticate with the namespace cookie (value truncated in the original)
curl.setopt(pycurl.COOKIE, "hcp-ns-auth=b...")
curl.setopt(pycurl.URL, "https://finance.europe.hcp.example.com/proc/statistics")
curl.perform()
print(curl.getinfo(pycurl.RESPONSE_CODE))
curl.close()
Comments (2)
For block-oriented storage it would make sense if the storage array exposed an NFS or CIFS mount point per attached server. That way the server could update metadata for each attached LUN describing which blocks actually contain data. The storage array could then reclaim unused blocks and thus make thin provisioning more efficient.
The server could also send out-of-band messages to the array via the mount point when a capacity depletion alert is issued, and the array could then automatically expand the pool or steal storage from other pools, instead of the current scenario, which requires manual intervention.
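Vinod's idea could be sketched roughly as follows: the server periodically writes a small per-LUN file onto the array-exposed mount point describing which block ranges are in use, and the array reclaims everything outside those ranges. All of the paths, the file naming, and the JSON format here are hypothetical, invented purely for illustration:

```python
import json
import os
import tempfile

def publish_used_blocks(mount_point, lun_id, used_ranges):
    """Write the list of in-use block ranges for one LUN to a file on the
    (hypothetical) array-exposed mount point, so the array could reclaim
    any blocks outside those ranges."""
    record = {"lun": lun_id, "used_ranges": used_ranges}
    path = os.path.join(mount_point, "lun-%s.json" % lun_id)
    with open(path, "w") as f:
        json.dump(record, f)
    return path

# Demonstration against a temporary directory standing in for the mount point
mount = tempfile.mkdtemp()
lun_path = publish_used_blocks(mount, "0042", [[0, 4095], [81920, 90111]])
with open(lun_path) as f:
    print(json.load(f))
```

The array side would simply watch that directory for updated files, which is the same read-a-file contract as everything else in this model.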
Great point, Vinod. This is one use case that I had not thought of. I was actually thinking about things like creating snapshots, provisioning LUNs/shares, changing port configurations, etc. Further, I think there is a solid approach to messaging streams, similar to SNMP traps or CIM indications, that can notify subscribers (meaning those who read the file) of events relevant to the system. Personally, having developed against both HTTP-based APIs and file system APIs, I would take the latter over the former.
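The "subscribers read a file" notification model mentioned above can be sketched as a tail-style poll over an append-only event file. The file name and the event format are made up for illustration:

```python
import os
import tempfile
import time

def follow(path, poll_interval=0.5, max_polls=None):
    """Yield lines appended to an event file, the way a subscriber would
    consume a pseudo-file of system events. Stops after max_polls empty
    reads (pass None to follow forever)."""
    polls = 0
    with open(path) as f:
        while max_polls is None or polls < max_polls:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                polls += 1
                time.sleep(poll_interval)

# Demonstration: write two events, then read them back as a subscriber would
path = os.path.join(tempfile.mkdtemp(), "events")
with open(path, "w") as f:
    f.write("snapshot.created vol=finance\nlun.provisioned id=0042\n")

events = list(follow(path, poll_interval=0.01, max_polls=1))
print(events)
```

Compared with SNMP traps or CIM indications, the subscriber here needs no agent or stack, only the ability to keep a file open and read from it.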