Revamp Targets, Database Manager, Database Connections and deploy features

I am having a hard time putting the pieces together. I think you guys need to take a step back and rethink how all these pieces should fit together because, in the current state, it is very messy and lacking.

Right now we have Targets and Database Manager.

Target

Hosting -> Own server
Usage: Development, Staging, Production
Access type: OS Folder, FTP, SFTP

Hosting -> Firebase
Usage: Development, Staging, Production
Web Server URL

Hosting -> Docker Engine
Usage: Development, Staging, Production
Connection -> Local
Connection -> Remote -> Docker Machine

Database Manager

Database Connections -> Server Connect or Direct
Database
Migrations
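
Just to make the fragmentation concrete, here is how I read the above as a rough TypeScript sketch. This is purely illustrative; the type names are mine, not Wappler's internal model:

```typescript
// Illustrative only: type names are my own, not Wappler's actual model.
type Usage = 'Development' | 'Staging' | 'Production';

// Targets bundle hosting, usage and file access into one thing.
type Target =
  | { hosting: 'OwnServer'; usage: Usage; accessType: 'OSFolder' | 'FTP' | 'SFTP' }
  | { hosting: 'Firebase'; usage: Usage; webServerUrl: string }
  | { hosting: 'DockerEngine'; usage: Usage; connection: 'Local' | { remote: 'DockerMachine' } };

// The Database Manager sits next to Targets, not inside them.
interface DatabaseManager {
  connections: Array<{ kind: 'ServerConnect' | 'Direct' }>;
  databases: string[];
  migrations: string[];
}
```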

I can make these features work, but they don't make sense to me, especially since the introduction of the Database Manager.

Probably we just need to rename some old Wappler terminology, tweak a few things, and then add a bit more.

For me it should be something like this:

Top level we have:

Projects

Here we store configuration that is shared throughout the whole project.

Next level we have:

Environments

Similar to targets in concept but with a few key differences.
Users can create as many environments as they want.
Each environment has a user-defined color, a name and possibly an icon, so there is a clear visual indicator of which environment we are working in.
Users can attach to each environment the nodes/features/options/plugins that suit their needs. The list of nodes would keep growing as you add goodies to Wappler :slight_smile:

Nodes you can attach:

Docker Engine

Local (the user will provide configuration like installation folders, run parameters, etc.)

You can mix Docker and local nodes. I might want to run my DB on Docker but use lighttpd on my desktop or via a Dockerfile. Wappler would have to handle this, as extra configuration is needed when mixing both.

For the web server node you will have to provide the folder where the files live; in the case of Docker this is handled internally.

For each DB engine node you add, you will have to provide the root user and password; in the case of Docker these would be pre-populated.

In the Database Manager you can create databases, and for each database you define the connection details specific to it. These would be pre-populated, as we already have root access from the environment setup.

In some environments you may or may not have access to certain nodes. For instance, I might not have access to my files in QA and PRD when hosting with Heroku, but I still want to be able to run my migrations there in a streamlined fashion. That will depend on our own pipeline, which brings me to my next concept.
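
To make the idea a bit more concrete, here is a rough sketch of the Project/Environment shape I have in mind. TypeScript, purely illustrative; every name and field here is my own assumption, not anything Wappler has today:

```typescript
// Illustrative sketch of the proposed model; all names and fields are assumptions.
interface Project {
  name: string;
  sharedConfig: Record<string, unknown>; // configuration shared throughout the project
  environments: Environment[];
}

interface Environment {
  name: string;             // e.g. "Local", "QA", "PRD"
  color: string;            // user-defined, for the visual indicator
  icon?: string;
  nodes: EnvironmentNode[]; // mix-and-match: Docker and local nodes can coexist
}

type EnvironmentNode =
  | { type: 'webserver'; runsOn: 'docker' | 'local'; folder?: string } // folder needed when local, handled internally on Docker
  | { type: 'database'; runsOn: 'docker' | 'local'; engine: string;
      rootUser: string; rootPassword: string }                         // pre-populated when running on Docker
  | { type: 'plugin'; id: string; options?: Record<string, unknown> };

// The Database Manager then creates databases per environment, with the
// connection details pre-filled from the root access defined above.
interface ManagedDatabase {
  environment: string; // which Environment it belongs to
  name: string;
  connection: { host: string; port: number; user: string; password: string };
}
```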

On top of the environments you have:

Pipelines

These are optional for the user; whether you need one depends on the complexity of the app.

A pipeline defines the flow of steps your app follows from your development system to production, and how you move your files in between. In the future you could add nodes to each step, like testing, CI and bundling, but right now it could handle database migrations, Git pushing, FTP uploads and Docker machines, which are the current methods of moving files. So you add environments to the pipeline and define how files move between them.
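
And a similarly rough sketch of how a pipeline could be described. Again, the step and transport names are just my own assumptions:

```typescript
// Illustrative sketch of a pipeline; transport names and fields are assumptions.
type FileTransport = 'git-push' | 'ftp-upload' | 'docker-machine';

interface PipelineStep {
  from: string;              // source Environment name, e.g. "Local"
  to: string;                // target Environment name, e.g. "QA"
  transport: FileTransport;  // how files move between the two environments
  runMigrations: boolean;    // run database migrations as part of the step
  // future nodes: testing, CI, bundling, ...
}

interface Pipeline {
  name: string;
  steps: PipelineStep[];     // e.g. Local -> QA -> PRD
}

// Example: a two-step pipeline from Local to QA to PRD.
const deployPipeline: Pipeline = {
  name: 'default',
  steps: [
    { from: 'Local', to: 'QA',  transport: 'git-push',       runMigrations: true },
    { from: 'QA',    to: 'PRD', transport: 'docker-machine', runMigrations: true },
  ],
};
```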

This structure, or something similar, would make Wappler more flexible in how users develop and deploy, and it would also streamline database provisioning and management, because right now it's all quite messy.

Sure, this post is just some ideas, and I can already identify gaps in my own approach. But I just wanted to let you know that the current implementation could be improved and made more 2020-ish. I believe what we currently have was OK when FTP was the only way of deploying from Wappler, but now that we are starting to have more options it needs to be re-imagined.

What do you all think?
