MscmpSystDb (mscmp_syst_db v0.1.0)

A database management Component for developing and managing database-per-tenant oriented systems. To achieve this we wrap and extend the popular Ecto and EctoSql libraries with a specialized templated (EEx) migrations system and add additional, opinionated abstractions encapsulating the tenant model as it relates to development, data access, and runtime concerns.

Important

"Database-per-tenant" is not the typical tenancy implementation pattern for Elixir/Phoenix based applications. As with most choices in software architecture and engineering there are trade-offs between the different tenancy approaches that you should be well-versed with prior to committing to this or any other tenancy model for your applications.

Concepts

There are several concepts requiring definitions which should be understood before continuing. Most of these concepts relate to runtime concerns though understanding them will inform your sense of the possibilities and constraints on development and deployment scenarios.

Datastore

A Datastore can most simply be thought of as a single database created to support either a tenant environment or an administrative function of the application. More specifically, a Datastore establishes a store of data and a security boundary at the database level for the data of a tenant or of administrative functionality.

Using MscmpSystDb.create_datastore/2 will automatically create the database backing the Datastore.

Datastores and the Ecto dynamic repositories which back them are started and stopped at runtime using this Component's API. Datastores are not typically started directly via OTP application related functionality at application startup. This is chiefly because we don't assume to even know what Datastores actually exist until we've started up an administrative Datastore which records the information.
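Putting these pieces together, the runtime lifecycle of a single Datastore might be sketched as follows. This is illustrative only: the dev-support options helper is used here for brevity, and real applications would typically read Datastore options from an administrative Datastore or configuration.

```elixir
# Retrieve a populated DatastoreOptions struct (dev-support helper used
# here purely for brevity in this sketch).
datastore_options = MscmpSystDb.get_datastore_options()

# Create the backing database and the roles for each Datastore Context.
MscmpSystDb.create_datastore(datastore_options)

# Start the Ecto dynamic repos/connections for the login contexts.
{:ok, _started, _context_states} = MscmpSystDb.start_datastore(datastore_options)

# ... perform work via a Login Datastore Context ...

# Shut the connections down when the tenant environment stops.
MscmpSystDb.stop_datastore(datastore_options)
```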

Datastore Context

A Datastore Context represents a PostgreSQL database role which is used to establish Datastore access and security contexts using database security features. Datastore Contexts are specific to a single Datastore and are managed by this Component, including their creation, maintenance, and dropping as needed, typically in conjunction with Datastore creation/deletion.

Behind the scenes Datastore Contexts use the "Ecto Dynamic Repositories" feature. Each Datastore Context is backed by an Ecto Dynamic Repo. Starting a Datastore Context starts its Ecto Dynamic Repo including establishing the connections to the database. Stopping a Datastore Context shuts that associated Dynamic Repo down and terminates its database connections.

There are several different kinds of Datastore Contexts which can be defined:

  • Owner: This kind of Datastore Context creates a database role to serve as the database owner of all the database objects backing the Datastore making it the de facto admin role for the Datastore. While the Owner Datastore Context owns the database objects backing the Datastore, it is only a regular database role (no special database rights) and it cannot be a database login role itself. All Datastores must have exactly one Owner Datastore Context defined.

  • Login: The Login Datastore Context is a regular database role with which the application can log into the database and perform operations allowed by the database security policies established by the database developer. There can be one or more Login Datastore Contexts in order to support the various security profiles the application may assume, or to build connection pools with varying limits depending on some application specific need (e.g. connections supporting the web user interface vs. connections supporting external API interactions). For a Datastore to be useful there must be at least one Login Datastore Context defined for the Datastore.

  • Non-Login: Beyond the required Owner Datastore Context, there are other scenarios where non-login roles could be useful in managing access to specific database objects, though how useful Non-Login roles are will depend on application specific factors; the expectation is that their use will be rare. Naturally, there is no requirement for Non-Login Datastore Contexts to be defined for any Datastore.

Finally, when we access the database from the application we'll always do so identifying one of our Login Datastore Contexts. This is done using MscmpSystDb.put_datastore_context/1, which behind the scenes uses the Ecto.Repo dynamic repository features (Ecto.Repo.put_dynamic_repo/1). Note that there is no default Ecto Repo, dynamic or otherwise, defined in the system. Any attempt to access a Datastore Context without having established the current Datastore Context for the process will cause the process to crash.
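In practice a process selects its context before querying. In the sketch below the context name is a hypothetical example:

```elixir
# Set the current Datastore Context in this process's Process Dictionary;
# the atom name is a hypothetical Login Datastore Context.
MscmpSystDb.put_datastore_context(:tenant1_app_access)

# Subsequent query functions in this Component implicitly use the
# selected context's database connections.
{:ok, _row} = MscmpSystDb.query_for_one("SELECT true", [])
```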

Warning!

Datastore Contexts are created and destroyed by the application using the API functions in this Component. The current implementation of Login Datastore Contexts, however, is expected to have certain security weaknesses related to database role credential management.

With this in mind, do not look to our implementation as an example of how to approach such a problem until this and other warnings disappear. The reality is that while in certain on-premises scenarios our current approach might well be workable, it was designed with the idea of kicking the can of a difficult and sensitive problem down the road and not as a final solution that we'd stand behind. We do believe this problem is solvable with sufficient time and expertise.

Database Development

Our development model assumes that there are fundamentally two phases of development related to the database: Initial/Major Development and Ongoing Maintenance.

Initial/Major Development

When initially developing a database schema, prior to any releases of usable software, the typical "migrations" oriented development pattern of a continuing sequence of incremental changes is significantly less useful than it is during later, maintenance oriented phases of development. During initial development it is more useful to see database schema changes through the lens of traditional source control methodologies. The extent to which this is true will naturally vary depending on the application. Larger, database-centric applications will benefit from this phase of development significantly more than smaller applications where the database provides simple persistence and the data isn't significant beyond that support role.

Ongoing Maintenance

Once there is an active release of the software and future deployments will be focused on maintaining already running databases, our model shifts to the norms typical of the traditional migrations database development model. We expect smaller, relatively independent changes which are simply applied in sequence. Unlike other migration tools such as the migrator built into EctoSql, we have some additional ceremony related to sequencing migrations, but aside from these minor differences our migrations will resemble those of other tools once in the maintenance phase of development.

Note

Despite the discussion above, the distinction between "Initial/Major Development" and "Ongoing Maintenance" is a distinction in developer practice only; the tool itself doesn't make this distinction but is merely designed to support a workflow recognizing these phases. The cost of supporting the Initial/Major Development concept is that migrations are not numbered or sequenced automatically, as will be shown below. If you don't need the Initial/Major Development phase, the traditional EctoSql migrator may be more suitable to your needs.

Source Files & Building Migrations Overview

In the more typical migrations model, the migration files are themselves the source code of the database changes. This Component separates the two concepts:

  • Database source code files are written by the developer as the developer sees fit. Database source files are what we are most concerned with from a source control perspective; these files can be freely modified and changes committed up to the point that they are built into released migrations. Database source files are written in (mostly) plain SQL; EEx tags are allowed in the SQL and can be bound at migration time.

  • Once the database source code has reached some stage of completion, the developer can use the mix builddb task to generate migration files from the database sources. In order to build the migration files, the developer will create a TOML "build plan" file which indicates which database source files to include in the migrations and their sequence. For more about the build process and build plans see the mix builddb task documentation.
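To illustrate, a database source file is mostly plain SQL with optional EEx tags. The file name, schema, table, and binding name below are all hypothetical examples:

```sql
-- Hypothetical database source file (e.g. example_table.eex.sql).
-- Mostly plain SQL; the EEx tag below is bound at migration time.
CREATE TABLE example_schema.example_table
    ( id    bigint  PRIMARY KEY GENERATED ALWAYS AS IDENTITY
    , name  text    NOT NULL );

-- The role name is resolved from the migration bindings when deployed.
GRANT SELECT ON example_schema.example_table TO <%= ms_appusr %>;
```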

Now let's connect this back to the development phases discussed previously. During the "Initial/Major Development" phase, we expect that there will be many database source files and that these files will be written, committed to source control, modified, and re-committed to source control not as migrations but as you would any other source file (for example, one file per table); we might also be building migration files at this time for testing purposes, but until the application is released we'd expect the migration files to be cleaned out and rebuilt. Finally, once tests, code reviews, etc. are complete and a release is ready to be prepared, a final mix builddb is run to create the release migrations and those migrations are committed to source control.

From this point forward we generally wouldn't modify the original database source files or the final release migrations: the release migrations are essentially set in stone once they may be deployed to an environment where dropping the database is not an option. Subsequent development in the "Ongoing Maintenance" phase looks similar to traditional migration development. For any modification to the database you'll create new database source files specifically for those modifications, and they'll get new version numbers which will in turn create new migrations when mix builddb builds them. These will then be deployed to the database as standard migrations would be.

Migration Deployments

Once built, migration files are deployed to a database similar to the way traditional migration systems perform their deployments: the migrations are checked, in migration number order, against a special database table listing the previously deployed migrations (table ms_syst_db.migrations). If a migration has been previously deployed, it's skipped and the deployment process moves on to the next migration; if the migration needs to be deployed it is applied to the database and, assuming successful deployment, the process moves on to the next migration or exits when all outstanding migrations have been applied.

Each individual migration is applied in a single database transaction. This means that if part of a migration fails to apply to the database successfully, the entire migration is rolled back and the database will be in the state of the last fully successful migration application. A migration application failure will abort the migration process, cancelling the attempted application of migrations after the failed migration.

Unlike the EctoSql based migration process, migrations in MscmpSystDb are expected to be managed at runtime by the application. There is no external mix oriented migration deployment process. Migration processes are started for each tenant database individually allowing for selective application of migrations to the specified environment or allowing for "upgrade window" style functionality. Migrations are also EEx templates and template bindings can be supplied to the migrator to make each deployment specific to the database being migrated if appropriate. Naturally, much depends on the broader application design, but the migrator can support a number of different scenarios for deployment of database changes.
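A runtime migration of a single tenant database might look like the following sketch. The Datastore type name and EEx binding keys are hypothetical examples, and the dev-support options helper stands in for application-specific option retrieval:

```elixir
# Options for the tenant Datastore being migrated (dev-support helper
# used here purely for brevity in this sketch).
datastore_options = MscmpSystDb.get_datastore_options()

# Apply any outstanding migrations of the "tenant" type; the keyword list
# supplies EEx bindings resolved in the built migration files.
{:ok, _applied_migrations} =
  MscmpSystDb.upgrade_datastore(
    datastore_options,
    "tenant",
    ms_owner: "tenant1_owner",
    ms_appusr: "tenant1_appusr"
  )
```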

Finally, the migrator can, in a single application, manage and migrate different database schemas/migration sets depending on the identified "type". This means that different database schemas for different subsystems can be supported by the migration system in a single application. This assumes that a single database is of a single type; that type may be any of the available types, but mixing of types in a single database is not allowed.

Custom Database Types

Ecto, EctoSql, and the underlying PostgreSQL library Postgrex offer decent PostgreSQL data type support out of the box, but they don't directly map some of the database types that can be helpful in business software, such as PostgreSQL range types, internet address types, and interval types. To this end we add some custom database data types via the modules in the MscmpSystDb.DbTypes.* namespace.

Data Access Interface

The Ecto library offers a data access and manipulation API via the Ecto.Repo module. We wrap, and in some cases extend, the majority of that functionality in this Component as documented in the Query section. As a rule of thumb, you should call on this module for such needs even when the same can be achieved with the Ecto library directly. This recommendation is not meant to suggest that you shouldn't use the Ecto.Query DSL or related functions for constructing queries; using the Ecto Query DSL is, in fact, recommended absent a compelling reason to do otherwise.
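For example, a query might be built with the standard Ecto Query DSL and then executed through this Component's wrapper rather than a named Ecto.Repo. The schema module below is a hypothetical example:

```elixir
import Ecto.Query

# Build the query with the ordinary Ecto Query DSL; MyApp.ExampleItem is
# a hypothetical Ecto schema module.
query = from(i in MyApp.ExampleItem, where: i.active, select: i.name)

# Execute against the current Datastore Context via this Component's
# wrapper rather than calling a named Ecto.Repo directly.
active_names = MscmpSystDb.all(query)
```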

Summary

Query

A convenience function that currently wraps the Ecto.Repo.aggregate/4 function.

A convenience function that currently wraps the Ecto.Repo.all/2 function.

A convenience function that currently wraps the Ecto.Repo.delete/2 function.

A convenience function that currently wraps the Ecto.Repo.delete!/2 function.

A convenience function that currently wraps the Ecto.Repo.delete_all/2 function.

A convenience function that currently wraps the Ecto.Repo.exists?/2 function.

A convenience function that currently wraps the Ecto.Repo.get/3 function.

A convenience function that currently wraps the Ecto.Repo.get!/3 function.

A convenience function that currently wraps the Ecto.Repo.get_by/3 function.

A convenience function that currently wraps the Ecto.Repo.get_by!/3 function.

A convenience function that currently wraps the Ecto.Repo.in_transaction?/0 function.

A convenience function that currently wraps the Ecto.Repo.insert/2 function.

A convenience function that currently wraps the Ecto.Repo.insert!/2 function.

A convenience function that currently wraps the Ecto.Repo.insert_all/3 function.

A convenience function that currently wraps the Ecto.Repo.insert_or_update/2 function.

A convenience function that currently wraps the Ecto.Repo.insert_or_update!/2 function.

A convenience function that currently wraps the Ecto.Repo.load/2 function.

A convenience function that currently wraps the Ecto.Repo.one/2 function.

A convenience function that currently wraps the Ecto.Repo.one!/2 function.

A convenience function that currently wraps the Ecto.Repo.preload/3 function.

A convenience function that currently wraps the Ecto.Repo.prepare_query/3 function.

Executes a database query and returns all rows.

Executes a database query and returns all rows. Raises on error.

Executes a database query but returns no results.

Executes a database query but returns no results. Raises on error.

Executes a database query and returns a single row.

Executes a database query and returns a single row. Raises on error.

Executes a database query returning a single value.

Executes a database query returning a single value. Raises on error.

Returns the record count of the given queryable argument.

A convenience function that currently wraps the Ecto.Repo.reload/2 function.

A convenience function that currently wraps the Ecto.Repo.reload!/2 function.

A convenience function that currently wraps the Ecto.Repo.rollback/1 function.

A convenience function that currently wraps the Ecto.Repo.stream/2 function.

A convenience function that currently wraps the Ecto.Repo.transaction/2 function.

A convenience function that currently wraps the Ecto.Repo.update/2 function.

A convenience function that currently wraps the Ecto.Repo.update!/2 function.

A convenience function that currently wraps the Ecto.Repo.update_all/3 function.

Datastore Management

Creates a new Datastore along with its contexts.

Creates database roles to back all requested Datastore contexts.

Drops a Datastore along with its contexts.

Returns the state of the requested contexts.

Returns the state of the database and database roles which back the Datastore and contexts, respectively, of the provided Datastore options definition.

Datastore Migrations

Returns the most recently installed database migration version number.

Updates a Datastore to the most current version of the given type of Datastore.

Runtime

Retrieves either the atom name or pid/0 of the currently established Datastore Context, if one has been established.

Establishes the Datastore Context to use for Datastore interactions in the Elixir process where this function is called.

Starts database connections for all of the login contexts in the Datastore options.

Starts a database connection for the specific Datastore context provided.

Disconnects the database connections for all of the login contexts in the Datastore options.

Disconnects the database connection for the specific Datastore context provided.

Development Support

Drops a Datastore previously created by the load_database/2 function in support of development related activities.

Retrieves a populated MscmpSystDb.Types.DatastoreOptions.t/0 struct which can be used to facilitate development activities involving the database.

Retrieves the name of the login Datastore Context typically used in development support.

Retrieves a populated MscmpSystDb.Types.DatastoreOptions.t/0 struct with defaults appropriate for interactive development support.

Retrieves the name of the login Datastore Context typically used in testing support.

Retrieves a populated MscmpSystDb.Types.DatastoreOptions.t/0 struct with defaults appropriate for setting up test script database services.

Creates a Datastore, related Datastore Contexts, and processes migrations for the identified type in support of development related activities.

Query

aggregate(queryable, aggregate, field, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.aggregate/4 function.

all(queryable, opts \\ [])

@spec all(Ecto.Queryable.t(), Keyword.t()) :: [Ecto.Schema.t()]

A convenience function that currently wraps the Ecto.Repo.all/2 function.

delete(struct_or_changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.delete/2 function.

delete!(struct_or_changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.delete!/2 function.

delete_all(queryable, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.delete_all/2 function.

exists?(queryable, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.exists?/2 function.

get(queryable, id, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.get/3 function.

get!(queryable, id, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.get!/3 function.

get_by(queryable, clauses, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.get_by/3 function.

get_by!(queryable, clauses, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.get_by!/3 function.

in_transaction?()

A convenience function that currently wraps the Ecto.Repo.in_transaction?/0 function.

insert(struct_or_changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.insert/2 function.

insert!(struct_or_changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.insert!/2 function.

insert_all(schema_or_source, entries_or_query, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.insert_all/3 function.

insert_or_update(changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.insert_or_update/2 function.

insert_or_update!(changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.insert_or_update!/2 function.

load(module_or_map, data)

A convenience function that currently wraps the Ecto.Repo.load/2 function.

one(queryable, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.one/2 function.

one!(queryable, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.one!/2 function.

preload(structs_or_struct_or_nil, preloads, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.preload/3 function.

prepare_query(operation, query, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.prepare_query/3 function.

query_for_many(query, query_params \\ [], opts \\ [])

@spec query_for_many(iodata(), [term()], Keyword.t()) ::
  {:ok,
   %{
     :rows => nil | [[term()] | binary()],
     :num_rows => non_neg_integer(),
     optional(atom()) => any()
   }}
  | {:error, MscmpSystError.t()}

Executes a database query and returns all rows.

query_for_many!(query, query_params \\ [], opts \\ [])

@spec query_for_many!(iodata(), [term()], Keyword.t()) :: %{
  :rows => nil | [[term()] | binary()],
  :num_rows => non_neg_integer(),
  optional(atom()) => any()
}

Executes a database query and returns all rows. Raises on error.

query_for_none(query, query_params \\ [], opts \\ [])

@spec query_for_none(iodata(), [term()], Keyword.t()) ::
  :ok | {:error, MscmpSystError.t()}

Executes a database query but returns no results.

query_for_none!(query, query_params \\ [], opts \\ [])

@spec query_for_none!(iodata(), [term()], Keyword.t()) :: :ok

Executes a database query but returns no results. Raises on error.

query_for_one(query, query_params \\ [], opts \\ [])

@spec query_for_one(iodata(), [term()], Keyword.t()) ::
  {:ok, [any()]} | {:error, MscmpSystError.t()}

Executes a database query and returns a single row.

query_for_one!(query, query_params \\ [], opts \\ [])

@spec query_for_one!(iodata(), [term()], Keyword.t()) :: [any()]

Executes a database query and returns a single row. Raises on error.

query_for_value(query, query_params \\ [], opts \\ [])

@spec query_for_value(iodata(), [term()], Keyword.t()) ::
  {:ok, any()} | {:error, MscmpSystError.t()}

Executes a database query returning a single value.

query_for_value!(query, query_params \\ [], opts \\ [])

@spec query_for_value!(iodata(), [term()], Keyword.t()) :: any()

Executes a database query returning a single value. Raises on error.

record_count(queryable, opts)

Returns the record count of the given queryable argument.

reload(struct_or_structs, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.reload/2 function.

reload!(struct_or_structs, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.reload!/2 function.

rollback(value)

A convenience function that currently wraps the Ecto.Repo.rollback/1 function.

stream(queryable, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.stream/2 function.

transaction(job, opts \\ [])

@spec transaction(
  (... -> any()) | Ecto.Multi.t(),
  keyword()
) :: {:error, MscmpSystError.t()} | {:ok, any()}

A convenience function that currently wraps the Ecto.Repo.transaction/2 function.

update(changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.update/2 function.

update!(changeset, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.update!/2 function.

update_all(queryable, updates, opts \\ [])

A convenience function that currently wraps the Ecto.Repo.update_all/3 function.

Datastore Management

create_datastore(datastore_options, opts \\ [])

Creates a new Datastore along with its contexts.

The creation of a new Datastore includes creating a new database to back the Datastore and database roles representing each of the Datastore contexts.

create_datastore_contexts(datastore_options, datastore_contexts, opts \\ [])

@spec create_datastore_contexts(
  MscmpSystDb.Types.DatastoreOptions.t(),
  [MscmpSystDb.Types.DatastoreContext.t(), ...],
  Keyword.t()
) ::
  {:ok, [MscmpSystDb.Types.ContextState.t(), ...]}
  | {:error, MscmpSystError.t()}

Creates database roles to back all requested Datastore contexts.

Usually Datastore contexts are created in the create_datastore/2 call, but over the course of time it is expected that applications may define new contexts as needs change. This function allows applications to add new contexts to existing Datastores.

drop_datastore(datastore_options, opts \\ [])

@spec drop_datastore(MscmpSystDb.Types.DatastoreOptions.t(), Keyword.t()) ::
  :ok | {:error, MscmpSystError.t()}

Drops a Datastore along with its contexts.

Dropping a Datastore will drop the database backing the Datastore from the database server as well as all of the database roles defined by the provided Datastore options.

Prior to dropping the Datastore, all active connections to the Datastore should be terminated or the function call could fail.

Note that this is an irreversible, destructive action. Any successful call will result in data loss.

drop_datastore_contexts(datastore_options, datastore_contexts, opts \\ [])

@spec drop_datastore_contexts(
  MscmpSystDb.Types.DatastoreOptions.t(),
  [MscmpSystDb.Types.DatastoreContext.t(), ...],
  Keyword.t()
) :: :ok | {:error, MscmpSystError.t()}

Drops the requested Datastore contexts.

This function will drop the database roles from the database server that correspond to the requested Datastore contexts. You should be sure that the requested Datastore contexts do not have active database connections when calling this function as active connections are likely to result in an error condition.

get_datastore_context_states(datastore_contexts, opts \\ [])

@spec get_datastore_context_states(
  MscmpSystDb.Types.DatastoreOptions.t(),
  Keyword.t()
) ::
  {:ok, [MscmpSystDb.Types.ContextState.t(), ...]}
  | {:error, MscmpSystError.t()}

Returns the state of the requested contexts.

For each given context, this function checks that it exists, whether or not database connections may be started for it, and whether or not database connections have been started.

Note that only startable contexts are included in this list. If the context is not startable or has id: nil, the context will be excluded from the results of this function.

get_datastore_state(datastore_options, opts \\ [])

Returns the state of the database and database roles which back the Datastore and contexts, respectively, of the provided Datastore options definition.

Datastore Migrations

get_datastore_version(datastore_options, opts \\ [])

@spec get_datastore_version(MscmpSystDb.Types.DatastoreOptions.t(), Keyword.t()) ::
  {:ok, String.t()} | {:error, MscmpSystError.t()}

Returns the most recently installed database migration version number.

The version is returned as the string representation of our segmented version number in the format RR.VV.UUU.SSSSSS.MMM where each segment represents a Base 36 number for specific versioning purposes. The segments are defined as:

  • RR - The major feature release number in the decimal range of 0 - 1,295.

  • VV - The minor feature version within the release in the decimal range of 0 - 1,295.

  • UUU - The update patch number of the specified release/version in the decimal range of 0 - 46,655.

  • SSSSSS - Sponsor or client number for whom the specific migration or version is being produced, in the decimal range of 0 - 2,176,782,335.

  • MMM - Sponsor modification number in the decimal range of 0 - 46,655.

See mix builddb for further explanation of the version number segment meanings.
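Because each segment is a Base 36 number, the segments can be decoded with standard library functions. For example:

```elixir
# Decode a segmented version string; each segment is Base 36.
"01.0A.00Z.000000.000"
|> String.split(".")
|> Enum.map(&String.to_integer(&1, 36))
# => [1, 10, 35, 0, 0]
```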

upgrade_datastore(datastore_options, datastore_type, migration_bindings, opts \\ [])

@spec upgrade_datastore(
  MscmpSystDb.Types.DatastoreOptions.t(),
  String.t(),
  Keyword.t(),
  Keyword.t()
) :: {:ok, [String.t()]} | {:error, MscmpSystError.t()}

Updates a Datastore to the most current version of the given type of Datastore.

If a Datastore is already up-to-date, this function is basically a "no-op" that returns the current version. Otherwise, database migrations for the Datastore type are applied until the Datastore is fully upgraded to the most recent schema version.

Runtime

current_datastore_context()

@spec current_datastore_context() :: atom() | pid()

Retrieves either the atom name or pid/0 of the currently established Datastore Context, if one has been established.

put_datastore_context(context)

@spec put_datastore_context(pid() | Ecto.Repo.t() | Ecto.Adapter.adapter_meta()) ::
  atom() | pid()

Establishes the Datastore Context to use for Datastore interactions in the Elixir process where this function is called.

Using this function will set the given Datastore Context in the Process Dictionary of the process from which the function call is made.

start_datastore(datastore_options, supervisor_name \\ nil)

@spec start_datastore(
  MscmpSystDb.Types.DatastoreOptions.t(),
  Supervisor.supervisor() | nil
) ::
  {:ok, :all_started | :some_started,
   [MscmpSystDb.Types.context_state_values()]}
  | {:error, MscmpSystError.t()}

Starts database connections for all of the login contexts in the Datastore options.

start_datastore_context(datastore_options, context)

@spec start_datastore_context(
  MscmpSystDb.Types.DatastoreOptions.t(),
  atom() | MscmpSystDb.Types.DatastoreContext.t()
) :: {:ok, pid()} | {:error, MscmpSystError.t()}

Starts a database connection for the specific Datastore context provided.

stop_datastore(datastore_options_or_contexts, db_shutdown_timeout \\ 60000)

Disconnects the database connections for all of the login contexts in the Datastore options.

stop_datastore_context(context, db_shutdown_timeout \\ 60000)

@spec stop_datastore_context(
  pid() | atom() | MscmpSystDb.Types.DatastoreContext.t(),
  non_neg_integer()
) ::
  :ok

Disconnects the database connection for the specific Datastore context provided.

Development Support

drop_database(datastore_options)

@spec drop_database(MscmpSystDb.Types.DatastoreOptions.t()) :: :ok

Drops a Datastore previously created by the load_database/2 function in support of development related activities.

Security Note

This operation is specifically intended to support development and testing activities and should not be used in code which runs in production environments.

This function ensures that the Datastore is stopped and then drops it.

Parameters

get_datastore_options(opts \\ [])

@spec get_datastore_options(Keyword.t()) :: MscmpSystDb.Types.DatastoreOptions.t()

Retrieves a populated MscmpSystDb.Types.DatastoreOptions.t/0 struct which can be used to facilitate development activities involving the database.

The DatastoreOptions will set all important values and identify two Datastore Contexts: the standard non-login "owner" Context, which will own the database objects, and a single login Context which would typically be used as an application context for accessing the database.

Security Note

The DatastoreOptions produced by this function are intended for use only in support of software development activities in highly controlled environments where real user data is not at risk of being compromised. The values included in the function's defaults effectively bypass a number of security measures and assumptions in order to facilitate developer convenience.

Currently this function does not support scenarios where additional login Contexts may be useful.

Parameters

  • opts - an optional parameter consisting of type Keyword.t/0 containing values which will override the function-supplied defaults. The available options are:

    • database_name - a binary value indicating a name for the database to use. The default database name is ms_devsupport_database.

    • datastore_code - a binary value providing a Datastore level salting value used in different hashing operations. The default value is "musesystems.publicly.known.insecure.devsupport.code"

    • datastore_name - a name for use by the application to identify a given Datastore. This value will often be the same as the database_name value. This value is converted to an atom. The default value is ms_devsupport_database.

    • description_prefix - a binary value which is prefixed to the descriptions of the created database contexts and which appear in the database role descriptions. The default value is "Muse Systems DevSupport".

    • database_role_prefix - a binary value which is prefixed to the names of the database roles created to back the Datastore Contexts. The default value is ms_devsupport.

    • context_name - a binary value which provides a unique context name for the login Context identified by this function. This value is converted to an atom by this function. The default value is ms_devsupport_context.

    • database_password - a binary value which is the database password that the login Datastore Context uses to log into the database. The default value is "musesystems.publicly.known.insecure.devsupport.apppassword".

    • starting_pool_size - the number of database connections the login Context will establish from the application. The default value is 5.

    • db_host - a string indicating the host address of the database server. This can be an IP address or resolvable DNS entry. The default value is 127.0.0.1.

    • db_port - an integer indicating the TCP port on which to contact the database server. The default value is the standard PostgreSQL port number 5432.

    • server_salt - a binary value providing a Datastore level salting value used in different hashing operations. The default value is "musesystems.publicly.known.insecure.devsupport.salt"

    • dbadmin_password - a binary value for the standard ms_syst_privileged database role account created via the database bootstrapping script. The default value is "musesystems.publicly.known.insecure.devsupport.password".

    • dbadmin_pool_size - the number of database connections which will be opened to support DBA or Privileged operations. The default value is 1.
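As a sketch, a handful of the defaults above can be overridden while the rest retain their documented values (the override values here are purely illustrative):

```elixir
# Unlisted options keep the documented devsupport defaults.
datastore_options =
  MscmpSystDb.get_datastore_options(
    database_name: "my_dev_database",
    db_host: "192.168.1.50",
    db_port: 5444,
    starting_pool_size: 10
  )
```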

get_devsupport_context_name()

@spec get_devsupport_context_name() :: atom()

Retrieves the name of the login Datastore Context typically used in development support.

This is a way to retrieve the standard development support name for use with functions such as put_datastore_context/1.
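A minimal sketch of the intended pairing:

```elixir
context_name = MscmpSystDb.get_devsupport_context_name()

# Route Datastore access for this process through the devsupport
# login context.
MscmpSystDb.put_datastore_context(context_name)
```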

get_devsupport_datastore_options(opts \\ [])

@spec get_devsupport_datastore_options(Keyword.t()) ::
  MscmpSystDb.Types.DatastoreOptions.t()

Retrieves a populated MscmpSystDb.Types.DatastoreOptions.t/0 struct with defaults appropriate for interactive development support.

Currently this function is simply an alias for get_datastore_options/1. All documentation for that function applies to this function.

Parameters

  • opts - an optional parameter consisting of type Keyword.t/0 containing values which will override the function-supplied defaults. The available options are the same as those for get_datastore_options/1.

get_testsupport_context_name()

@spec get_testsupport_context_name() :: atom()

Retrieves the name of the login Datastore Context typically used in testing support.

This is a way to retrieve the standard testing support name for use with functions such as put_datastore_context/1.

get_testsupport_datastore_options(opts \\ [])

@spec get_testsupport_datastore_options(Keyword.t()) ::
  MscmpSystDb.Types.DatastoreOptions.t()

Retrieves a populated MscmpSystDb.Types.DatastoreOptions.t/0 struct with defaults appropriate for setting up test script database services.

This function calls get_datastore_options/1 with alternate defaults suitable for running test scripts independently from database environments targeted to interactive development. Documentation for that function will largely apply for this function, except as specifically contradicted here.

Parameters

  • opts - an optional parameter consisting of type Keyword.t/0 containing values which will override the function-supplied defaults. The available options are:

    • database_name - a binary value indicating a name for the database to use. The default database name is ms_testsupport_database.

    • datastore_code - a binary value providing a Datastore level salting value used in different hashing operations. The default value is "musesystems.publicly.known.insecure.testsupport.code"

    • datastore_name - a name for use by the application to identify a given Datastore. This value will often be the same as the database_name value. This value is converted to an atom. The default value is ms_testsupport_database.

    • description_prefix - a binary value which is prefixed to the descriptions of the created database contexts and which appear in the database role descriptions. The default value is "Muse Systems TestSupport".

    • database_role_prefix - a binary value which is prefixed to the names of the database roles created to back the Datastore Contexts. The default value is ms_testsupport.

    • context_name - a binary value which provides a unique context name for the login Context identified by this function. This value is converted to an atom by this function. The default value is ms_testsupport_context.

    • database_password - a binary value which is the database password that the login Datastore Context uses to log into the database. The default value is "musesystems.publicly.known.insecure.testsupport.apppassword".

    • starting_pool_size - the number of database connections the login Context will establish from the application. The default value is 5.

    • db_host - a string indicating the host address of the database server. This can be an IP address or resolvable DNS entry. The default value is 127.0.0.1.

    • db_port - an integer indicating the TCP port on which to contact the database server. The default value is the standard PostgreSQL port number 5432.

    • server_salt - a binary value providing a Datastore level salting value used in different hashing operations. The default value is "musesystems.publicly.known.insecure.testsupport.salt"

    • dbadmin_password - a binary value for the standard ms_syst_privileged database role account created via the database bootstrapping script. The default value is "musesystems.publicly.known.insecure.devsupport.password".

    • dbadmin_pool_size - the number of database connections which will be opened to support DBA or Privileged operations. The default value is 1.
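A sketch of a test_helper.exs style setup, assuming a hypothetical "app" datastore_type and an alternate port so the test database can run alongside the development one:

```elixir
# test_helper.exs (sketch)
datastore_options = MscmpSystDb.get_testsupport_datastore_options(db_port: 5433)

# "app" is a hypothetical datastore_type; substitute the type defined
# by your project's migrations.
{:ok, _applied_migrations} = MscmpSystDb.load_database(datastore_options, "app")

ExUnit.start()
```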

load_database(datastore_options, datastore_type)

@spec load_database(MscmpSystDb.Types.DatastoreOptions.t(), String.t()) ::
  {:ok, [String.t()]} | {:error, MscmpSystError.t()}

Creates a Datastore, related Datastore Contexts, and processes migrations for the identified type in support of development-related activities.

Security Note

This operation is specifically intended to support development and testing activities and should not be used in code which runs in production environments.

This is a simplified and condensed version of the full process of database creation.
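A sketch of the full development life cycle, where "app" stands in for a project-defined datastore_type:

```elixir
datastore_options = MscmpSystDb.get_datastore_options()

# Creates the database and its backing roles, then applies the
# migrations for the hypothetical "app" Datastore type.
{:ok, _applied_migrations} = MscmpSystDb.load_database(datastore_options, "app")

# ...development work...

# Clean up when finished; drop_database stops the Datastore first.
:ok = MscmpSystDb.drop_database(datastore_options)
```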

Parameters