Persistent data¶
Tip
If you are writing your own files without using the Workflow
APIs, see A note on Script Behaviour.
Alfred provides special data and cache directories for each Workflow (in
~/Library/Application Support
and ~/Library/Caches
respectively).
Workflow
provides the following
attributes/methods to make it easier to access these directories:
datadir — The full path to your Workflow’s data directory.
cachedir — The full path to your Workflow’s cache directory.
datafile(filename) — The full path to filename under the data directory.
cachefile(filename) — The full path to filename under the cache directory.
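For example, a minimal sketch of building paths with these helpers (the file names are purely illustrative):
from workflow import Workflow

wf = Workflow()

# Absolute paths to the data and cache directories
data_dir = wf.datadir
cache_dir = wf.cachedir

# Full paths to (hypothetical) files inside those directories
db_path = wf.datafile('library.db')
tmp_path = wf.cachefile('results.json')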
The cache directory may be deleted during system maintenance, and is thus only
suitable for temporary data or data that are easily recreated.
Workflow
’s cache methods reflect this, and make it easy to replace
cached data that are too old. See Caching data for
details of the data caching API.
The data directory is intended for more permanent, user-generated data, or data that cannot be otherwise easily recreated. See Storing data for details of the data storage API.
It is easy to specify a custom file format for your stored data
via the serializer
argument if you want your data to be readable by the user
or by other software. See Serialization of stored/cached data for more details.
Tip
There are also similar methods related to the root directory of your
Workflow (where info.plist and your code are):
workflowdir — The full path to your Workflow’s root directory.
workflowfile(filename) — The full path to filename under your Workflow’s root directory.
These are used internally to implement “Magic” arguments, which provide assistance with debugging, updating and managing your workflow.
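As a rough illustration of these root-directory helpers (the file name is hypothetical):
from workflow import Workflow

wf = Workflow()

# The directory containing info.plist and your code
root = wf.workflowdir

# Full path to a (hypothetical) file bundled with the Workflow
icon_path = wf.workflowfile('icon.png')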
In addition, Workflow
also provides a
convenient interface for storing persistent settings with
Workflow.settings
.
See Settings and Keychain access for more
information on storing settings and sensitive data.
Caching data¶
Workflow
provides a few methods to simplify
caching data that are slow to retrieve or expensive to generate (e.g. downloaded
from a web API). These data are cached in your workflow’s cache directory (see
cachedir
). The main method is
Workflow.cached_data()
, which
takes a name under which the data should be cached, a callable to retrieve
the data if they aren’t in the cache (or are too old), and a maximum age in seconds
for the cached data:
from workflow import web, Workflow

def get_data():
    return web.get('https://example.com/api/stuff').json()

wf = Workflow()
data = wf.cached_data('stuff', get_data, max_age=600)
To retrieve data only if they are in the cache, call with None
as the
data-retrieval function (which is the default):
data = wf.cached_data('stuff', max_age=600)
Note
This will return None
if there are no corresponding data in the cache.
This is useful if you want to update your cache in the background, so it doesn’t impact your Workflow’s responsiveness in Alfred. (See the tutorial for an example of how to run an update script in the background.)
Tip
Passing max_age=0
will return the cached data regardless of age.
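For instance, reusing the get_data callable from the example above:
# max_age=0: return whatever is cached, however old it is
data = wf.cached_data('stuff', get_data, max_age=0)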
Clearing cached data¶
There is a convenience method for clearing a workflow’s cache directory.
clear_cache()
will by default delete all
the files contained in cachedir
. This is
the method called if you use the workflow:delcache
or workflow:reset
magic arguments.
You can selectively delete files from the cache by passing the optional
filter_func
argument to clear_cache()
.
This callable will be called with the filename (not path) of each file in the
workflow’s cache directory.
If filter_func
returns True
, the file will be deleted, otherwise it
will be left in the cache. For example, to delete all .zip
files in the
cache, use:
def myfilter(filename):
    return filename.endswith('.zip')

wf.clear_cache(myfilter)
or more simply:
wf.clear_cache(lambda f: f.endswith('.zip'))
Session-scoped cache¶
New in version 1.25.
Changed in version 1.27.
Note
This feature requires Alfred 3.2 or newer.
The cache_data()
and
cached_data()
methods of
Workflow
have an additional session
parameter.
If set to True
, the cache name is prefixed with the
session_id
, so the cache expires
as soon as the user closes Alfred or uses a different workflow.
This is useful for workflows that use data that become invalid as soon as the user switches away, such as a list of current browser tabs.
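A minimal sketch of a session-scoped cache (the cache name and the data function are illustrative):
from workflow import Workflow

def get_tabs():
    # Placeholder for the real (slow) lookup, e.g. querying the browser
    return ['https://example.com', 'https://example.org']

wf = Workflow()
# The cache name is prefixed with the current session_id, so the data
# expire when the user closes Alfred or switches workflows
tabs = wf.cached_data('tabs', get_tabs, session=True)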
Important
Alfred-PyWorkflow doesn’t automatically clear up stale session data; you have to do that yourself.
Use clear_session_cache()
to delete stale
cached session data. Pass current=True
to also delete data for
the current session.
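For example:
# Delete cached data left over from previous sessions
wf.clear_session_cache()

# ...or also delete the current session's cached data
wf.clear_session_cache(current=True)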
Storing data¶
Workflow
provides two methods to store
and retrieve permanent data:
store_data()
and
stored_data()
.
These data are stored in your workflow’s data directory
(see datadir
).
from workflow import Workflow

wf = Workflow()
data = {'key': 'value'}
wf.store_data('name', data)
# data will be `None` if there is nothing stored under `name`
data = wf.stored_data('name')
These methods do not support the data expiry features of the cached data methods, but you can specify your own serializer for each datastore, making it simple to store data in, e.g., JSON or YAML format.
You should use these methods (and not the data caching ones) if the data you are saving should not be deleted as part of system maintenance.
If you want to specify your own file format/serializer, please see Serialization of stored/cached data for details.
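As a sketch, assuming store_data() accepts a serializer name such as 'json' (see Serialization of stored/cached data for the registered names):
from workflow import Workflow

wf = Workflow()
# Store the data as JSON so the user (or other software) can read the file
wf.store_data('points', {'x': 1, 'y': 2}, serializer='json')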
Clearing stored data¶
As with cached data, there is a convenience method for deleting all the files
stored in your workflow’s datadir
.
By default, clear_data()
will delete all the
files stored in datadir
. It is used by the
workflow:deldata
and workflow:reset
magic arguments.
It is possible to selectively delete files contained in the data directory by
supplying the optional filter_func
callable. Please see Clearing cached data
for details on how filter_func
works.
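For example, to delete only the .json files in the data directory:
wf.clear_data(lambda f: f.endswith('.json'))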
Settings¶
Tip
Alfred 3.6 introduced an API to update the workflow variables stored in the workflow’s configuration sheet (i.e. the values are persisted across workflow runs). See Workflow variables if you’d prefer to store your workflow’s settings there.
Workflow.settings
is a subclass of dict
that automatically
saves its contents to the settings.json
file in your Workflow’s data
directory when it is changed.
Settings
can be used just like a normal
dict
with the caveat that all keys and values must be serializable
to JSON.
Warning
A Settings
instance can only automatically
recognise when you directly alter the values of its own keys:
wf = Workflow()
wf.settings['key'] = {'key2': 'value'}  # will be automatically saved
wf.settings['key']['key2'] = 'value2'  # will *not* be automatically saved
If you’ve altered a data structure stored within your workflow’s
Workflow.settings
, you need to explicitly call
Workflow.settings.save()
.
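For example:
wf.settings['key'] = {'key2': 'value'}  # saved automatically
wf.settings['key']['key2'] = 'value2'  # nested change is not detected
wf.settings.save()  # persist the nested change explicitly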
If you need to store arbitrary data, you can use the cached data API.
If you need to store data securely (such as passwords and API keys),
Workflow
also provides simple access to
the macOS Keychain.
Keychain access¶
Methods Workflow.save_password(account, password)
,
Workflow.get_password(account)
and Workflow.delete_password(account)
allow access to the Keychain. They may raise PasswordNotFound
if no
password is set for the given account
or KeychainError
if
there is a problem accessing the Keychain. Passwords are stored in the user’s
default Keychain. By default, the Workflow’s Bundle ID will be used as the
service name, but this can be overridden by passing the service
argument
to the above methods.
Example usage:
from workflow import Workflow

wf = Workflow()

wf.save_password('aol', 'hunter2')

password = wf.get_password('aol')

wf.delete_password('aol')

# raises PasswordNotFound exception
password = wf.get_password('aol')
See the relevant part of the tutorial for a full example.
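To store a password under a different service name instead of the default Bundle ID, pass the service argument. A sketch (the account and service names are illustrative, and PasswordNotFound is assumed to be importable from the workflow package):
from workflow import Workflow, PasswordNotFound

wf = Workflow()
wf.save_password('bob@example.com', 'hunter2', service='com.example.myservice')

try:
    password = wf.get_password('bob@example.com', service='com.example.myservice')
except PasswordNotFound:
    password = None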
A note on Script Behaviour¶
In version 2.7, Alfred introduced a new Script Behaviour setting for Script Filters. This allows you (among other things) to specify that a running script should be killed if the user continues typing in Alfred.
If you enable this setting, it’s possible that Alfred will terminate your
script in the middle of some critical code (e.g. writing a file).
Alfred-PyWorkflow provides the uninterruptible
decorator to prevent your script being terminated in the middle of a
critical function.
Any function wrapped with uninterruptible
will
be executed fully, and any signal caught during its execution will be
handled when your function completes.
For example:
from workflow.workflow import uninterruptible

@uninterruptible
def critical_function():
    # Your critical code here
    pass
If you only want to write to a file, you can use the
atomic_writer
context manager. This does not
guarantee that the file will be written, but it does guarantee that the file
will only be written if the write succeeds (the data are first written to a
temporary file).
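A minimal sketch, assuming atomic_writer(path, mode) yields an ordinary writable file object (the file name is illustrative):
from workflow import Workflow
from workflow.workflow import atomic_writer

wf = Workflow()

# The data are written to a temporary file first; the target file is
# only replaced if the write completes successfully
with atomic_writer(wf.datafile('results.txt'), 'w') as fp:
    fp.write('some important data')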