DBNB Strategic Consulting

About the company

I am a partner in a small consulting company that I structured to deliver Digital Transformation projects.

During this challenging period, I had the opportunity to develop several projects, among which I would highlight the following:

Main customer projects delivered

The main purpose of the whole project was the migration of a monolithic application to microservices on AKS - Azure Kubernetes Service.
  • Environment Definition: I designed the three basic environments (dev, stg, prod), plus another one to hold the tools supporting the applications and a sandbox to run the many POCs created during the migration project.
  • Virtual Network: Designed the Virtual Network and subnets for each environment, provisioning a /20 address space per environment.
  • Registry: Designing and implementing the ACR - Azure Container Registry to hold the application images to be installed on the cluster. The same registry was planned to hold the Helm charts of the customized application supporting tools, for example SonarQube.
  • Vault: Designing and implementing Azure Key Vault to hold the many secrets, keys, and certificates.
  • SonarQube x SonarCloud: Evaluating the best alternative for the company based on its security and budget requirements.
  • SonarQube: Designing and implementing SonarQube Enterprise on an AKS cluster using Azure Database for PostgreSQL instead of the PostgreSQL bundled with SonarQube. This strategy eases SonarQube version upgrades and guarantees that the SonarQube database follows the same policies as the other databases managed by the company DBAs.
  • SonarQube Helm Chart: Creating a customized SonarQube Helm chart to be stored in the company ACR to guarantee a more controlled version upgrade of SonarQube.
  • ALZ - Azure Landing Zone: Evaluating the most important points addressed by the methodology and implementing them in the Terraform configuration files used to provision the entire environment.
  • Terraform: Provisioning the entire designed architecture using Terraform configuration files. Based on ALZ, I decided that 4 layers were enough to provision the required company resources, instead of the 10 layers defined by ALZ. I also applied the Azure naming conventions in all configuration files.
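The layered Terraform approach described above can be sketched as follows. This is a minimal, illustrative fragment: the layer split, resource names, region suffix, and address range are assumptions following the Azure naming conventions, not the customer's actual configuration.

```hcl
# Layer 2 of 4 (illustrative): shared platform resources, one set per environment.
# Names follow the Azure naming conventions: <type>-<workload>-<env>-<region>.

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-app-${var.environment}-brs" # e.g. vnet-app-prd-brs
  address_space       = ["10.10.0.0/20"]                  # one /20 per environment
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_container_registry" "acr" {
  name                = "crapp${var.environment}" # ACR names allow alphanumerics only
  sku                 = "Premium"
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_key_vault" "kv" {
  name                = "kv-app-${var.environment}-brs"
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
}
```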
The objective of this long-term project was to stabilize the application's operation and to design and implement best practices on GCP.
  • Stabilize the Application: I took several initiatives toward this objective, from defining processes and procedures for introducing changes into the production environment to institutionalizing observability concepts. With the correct analysis of the logs, it was possible to identify one of the biggest causes of application unavailability: a scalability failure in part of the system, caused by the incorrect simultaneous use of the GAE Standard and Flexible environments.
  • Cloud Logging: I created Logging queries using the Logging query language to monitor the most frequent errors in the applications, and ran a knowledge transfer on these features with the development area to help identify application problems at an early stage in production.
  • Organization Chart: The company's technology area did not have an adequate organizational structure to address many of the problems it faced. I designed a new organizational structure and hired the leaders for the test, database, and product owner areas.
  • Application warm-up: Evangelizing the development team on the importance of implementing warm-up requests in applications provisioned on GAE - Google App Engine to improve performance, as described in the following procedure: https://cloud.google.com/appengine/docs/standard/configuring-warmup-requests?tab=node.js.
  • Connection Pool: Database connection pooling reduces the cost of opening and closing connections by maintaining a "pool" of open connections that can be passed from database operation to database operation as needed. To solve many application problems, I had to evangelize the development team about the importance of implementing connection pools in the applications.
  • Black Lists: Most of the satisfaction surveys sent by SoluCX use the email channel, and the frequency with which the company's domain appeared on black lists was very high. To solve the problem, I implemented best practices on the Google DNS service to reduce the chance of the company appearing on black lists. Among the changes implemented, I would highlight: SPF (Sender Policy Framework) record implementation, DKIM (DomainKeys Identified Mail), DMARC (Domain-based Message Authentication, Reporting & Conformance), and tuning Twilio SendGrid (sendgrid.com).
  • GCP Best Practices: After the application stabilization initiatives described above, I started the design of a new GCP architecture based on GCP best practices and zero-trust policies. I created an architecture based on environments segregated by projects and used the Shared VPC (Virtual Private Cloud) concept, concentrating the critical resources in the Host Project.
  • Database Architecture: During the GCP architecture design, I also designed a new high-availability Cloud SQL for MySQL database architecture using failover and read replicas. During this process, it was necessary to demonstrate to the entire team the advantage of redirecting queries from the main database to the read replicas, thus increasing performance during data updates.
  • Terraform (IaC): The entire new architecture was provisioned through Terraform configuration with extensive module definitions, since this was the first time the company worked with IaC.
  • IAP Implementation: In the beginning, the company made extensive use of bastion hosts to deliver database/VM access to developers across the country and abroad. Google has a much better approach for this case: using IAP (Identity-Aware Proxy) and the Cloud SQL Auth Proxy, which are safer and more productive in day-to-day use. So, I decided to implement the solution in the Terraform configuration modules.
  • MongoDB Migration: MongoDB was previously installed on a VM on GCE - Google Compute Engine. Although the use of MongoDB was not intense, it was critical for the application, considering that the access configuration for each client was stored in MongoDB. Thus, after implementing the new architecture, I started the process of migrating MongoDB from the VMs to MongoDB as a service (MongoDB Atlas).
  • MySQL DB Migration: After the MongoDB migration, I started the customer database migration. The customer databases were provisioned as Cloud SQL resources using MySQL 5.6. We migrated all databases from 5.6 to MySQL 8.0, at the same time partitioning the big tables and enforcing the standard application database schema on all customer databases.
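The Cloud Logging queries mentioned above can be sketched with the Logging query language. This is an illustrative filter, not the customer's actual query: resource type, severity threshold, and message pattern are assumptions.

```
resource.type="gae_app"
severity>=ERROR
textPayload=~"Connection refused"
```

A filter like this surfaces only error-level entries from App Engine whose payload matches the given pattern, which is the kind of query that was handed over to the development team.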
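The email deliverability records from the Black Lists item typically take the shape below. The domain, DKIM selector, and policy values are placeholders for illustration, not the customer's real records; the DKIM entry follows SendGrid's CNAME-based domain authentication pattern.

```
; Illustrative DNS records for email authentication (placeholder values)
example.com.                IN TXT   "v=spf1 include:sendgrid.net ~all"
s1._domainkey.example.com.  IN CNAME s1.domainkey.u1234.wl.sendgrid.net.
_dmarc.example.com.         IN TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

SPF declares which servers may send mail for the domain, DKIM lets receivers verify message signatures, and DMARC tells them what to do when either check fails, which together is what keeps the domain off black lists.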
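The warm-up idea described in the Application warm-up item can be sketched as a minimal handler. This is an illustrative WSGI sketch under assumptions (the application's real framework and initialization work are not shown in the source; `_expensive_init` is a hypothetical placeholder): App Engine sends `GET /_ah/warmup` to a new instance before routing user traffic to it, so expensive setup happens off the critical path.

```python
# Minimal sketch of a GAE warm-up handler (illustrative, not the project's code).
_warmed_up = False

def _expensive_init():
    # Placeholder for real work: open DB pools, load config, prime caches.
    global _warmed_up
    _warmed_up = True

def app(environ, start_response):
    """Tiny WSGI app that routes the App Engine warm-up request."""
    if environ.get("PATH_INFO") == "/_ah/warmup":
        _expensive_init()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"warmed up"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]
```

Warm-up requests also have to be enabled in `app.yaml` (`inbound_services: - warmup`), as described in the procedure linked above.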
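The connection pooling concept from the Connection Pool item can be sketched in a few lines. This is a minimal illustration using SQLite so it is self-contained; it is not the stack or code from the project, and real applications would normally use their driver's or framework's built-in pool.

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal connection pool: connections are opened once up front and
    reused, instead of paying the open/close cost on every operation."""

    def __init__(self, database, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            conn = sqlite3.connect(database, check_same_thread=False)
            self._pool.put(conn)

    def acquire(self):
        return self._pool.get()  # blocks if the pool is exhausted

    def release(self, conn):
        self._pool.put(conn)

# Usage: borrow a connection, run queries, then return it to the pool.
pool = ConnectionPool(":memory:", size=2)
conn = pool.acquire()
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
result = conn.execute("SELECT x FROM t").fetchone()
pool.release(conn)
```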
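The read/write split from the Database Architecture item can be sketched as a simple router. This is an illustrative sketch, not the project's actual code: writes go to the primary and reads are spread across the read replicas, which is what relieves the primary during data updates.

```python
import random

class ReadWriteRouter:
    """Routes SELECT statements to a read replica and everything else
    (INSERT/UPDATE/DELETE/DDL) to the primary (illustrative sketch)."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas

    def target_for(self, sql):
        if sql.lstrip().upper().startswith("SELECT") and self.replicas:
            return random.choice(self.replicas)  # spread read load
        return self.primary

# Usage with placeholder endpoint names:
router = ReadWriteRouter("primary-db", ["replica-1", "replica-2"])
read_target = router.target_for("SELECT * FROM customers")
write_target = router.target_for("UPDATE customers SET name = 'x'")
```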