NTT DATA - Delivered Projects

About the company

NTT DATA is part of NTT Group, a trusted global innovator of IT and business services headquartered in Tokyo. NTT DATA helps clients transform through consulting, industry solutions, business process services, IT modernization and managed services. The company has offices in over 50 countries, with consolidated net sales of US$23.8 billion and 195,100 employees in FY23.

Main customer projects delivered

Projeto 4X - The main purpose of this project was to quadruple the overall productivity of IT application delivery. I implemented and maintained a broad range of automation tools, among which I would highlight the following:
  • GitOps: Defining and maintaining the deployment process around code repositories on GitLab, applying best practices for structuring the application and deployment manifest repositories.
  • GitLab: Structuring CI/CD projects; developing and maintaining pipelines; planning the overall organization of groups/subgroups; defining the branch strategy, issues and milestones; using GitLab APIs in automation tools; and defining the authentication strategy.
  • ArgoCD: Installation, configuration and integration with GitLab and Backstage.
  • Kustomize: I used Kustomize as a pipeline step to adjust the base/patches project manifests created by Backstage before they were submitted to ArgoCD for deployment on the Kubernetes clusters.
  • Dockerfile: Defining and implementing best practices for Docker image provisioning, considering the size, performance and security of the images created.
  • AWS: Provisioning, maintaining and troubleshooting EKS clusters.
  • Backstage: Creating and maintaining the templates used by Backstage to provision Java and Node 18 microservices, and integrating with GitLab to provision the required variables on the application projects using the HTTP request scaffolder actions.
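As an illustration of the Kustomize step described above, the pipeline could render the Backstage-generated manifests roughly as in this minimal GitLab CI sketch. Job names, the image tag, and the overlay path are assumptions for illustration, not the project's actual configuration:

```yaml
# .gitlab-ci.yml — illustrative sketch only
stages:
  - render

render-manifests:
  stage: render
  image: registry.k8s.io/kustomize/kustomize:v5.0.0   # image tag is an assumption
  script:
    # Point the overlay at the image built earlier in the pipeline
    - cd overlays/production
    - kustomize edit set image app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    # Render the final manifests; in a GitOps flow these are then committed
    # to the deployment repository that ArgoCD watches and syncs
    - kustomize build . > rendered.yaml
  artifacts:
    paths:
      - overlays/production/rendered.yaml
```

Keeping the rendered manifests in a separate deployment repository lets ArgoCD remain the single actor that applies changes to the clusters.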
Projeto Safira - The main purpose of this project was to implement observability on Red Hat OpenShift (on-premises) and to define processes and tools for the Kubernetes environment.
  • Assessment: Assessment of the current OpenShift cluster and tools infrastructure.
  • Architecture: I designed a new architecture proposal to address important improvement opportunities in the customer's infrastructure. Among the proposed changes I would highlight the following:
    • Tools Clusters: Provisioning three clusters to hold the tools related to Observability, Repositories and GitOps. Each cluster was to have similar autoscaling and configuration characteristics, designed according to its category.
    • Tools Provisioning: Changing the current tools installation to provision the database used by each tool on the company's database infrastructure. This guarantees that the tools' databases comply with company policies and eases the upgrade process of the tools themselves.
  • Observability: Provisioning the Prometheus/Grafana observability tools and training the team on them.
  • Istio: Configuring the service mesh on specific application namespaces of the cluster to monitor the four golden signals: latency, traffic, errors and saturation.
  • Trunk-Based Development: Proposing a new development strategy and implementing it on GitLab with a new branch strategy, issues, labels and milestones.
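For the Istio monitoring mentioned above, enabling the mesh on a namespace and querying the golden signals could look roughly like this. The namespace name is an assumption; the PromQL queries use Istio's standard metrics (istio_requests_total, istio_request_duration_milliseconds):

```yaml
# Illustrative sketch: opt a namespace into the mesh so Istio injects
# the Envoy sidecar, which exports the request metrics Prometheus scrapes
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # hypothetical application namespace
  labels:
    istio-injection: enabled

# Example golden-signal queries over the sidecar metrics:
#   latency:    histogram_quantile(0.95,
#                 sum(rate(istio_request_duration_milliseconds_bucket[5m])) by (le))
#   traffic:    sum(rate(istio_requests_total[5m]))
#   errors:     sum(rate(istio_requests_total{response_code=~"5.."}[5m]))
#   saturation: derived from resource metrics (e.g. container CPU/memory usage)
```

Because the sidecar reports these metrics uniformly, applications get the first three signals with no code changes; saturation still comes from cluster resource metrics.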
I implemented and maintained a broad range of automation tools, among which I would highlight the following:
  • Assessment: Assessment of the current Rancher architecture, based on three on-premises datacenters, and proposal of changes. One of the main changes proposed was moving the Rancher Manager from on-premises to a highly available cluster in a public cloud (GCP).
  • Logging: Configuring the Logging Operator on the clusters and building a namespace inventory to decide where to enable the logging sidecar. Knowledge transfer on best practices for implementing logging in the microservices.
  • GKE Secrets: Design and implementation of secrets best practices using vaults (GCP Secret Manager) and service accounts.
  • Service Mesh: Installation on the clusters and enabling it on the required namespaces.
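The secrets practice described above can be sketched with GKE Workload Identity, which binds a Kubernetes service account to a GCP service account so pods read secrets from Secret Manager without key files. All names here (namespace, account names, project) are illustrative:

```yaml
# Illustrative sketch, assuming Workload Identity is enabled on the GKE cluster
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa              # hypothetical Kubernetes service account
  namespace: payments       # hypothetical namespace
  annotations:
    # Bind this KSA to a GCP service account that was granted
    # roles/secretmanager.secretAccessor on the secrets the app needs
    iam.gke.io/gcp-service-account: app-sa@my-project.iam.gserviceaccount.com
```

With this binding in place, the application authenticates to GCP Secret Manager via the pod's identity, so no long-lived credentials are stored in the cluster.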
I worked on the project to institutionalize the CI/CD pipelines.
  • Observability: I participated in the implementation of the observability policies and tools chosen by the bank and their integration with applications, using the following tools: Prometheus, Grafana, Splunk and AppDynamics.
  • AWS: Training the development team in the use of pipelines developed on GitHub Actions, and troubleshooting pipelines with extensive use of Terraform and CloudFormation.
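The shape of the GitHub Actions pipelines used in the training above could be sketched as follows. The workflow name, trigger branch, and Terraform working directory are assumptions for illustration:

```yaml
# .github/workflows/deploy.yml — illustrative sketch only
name: deploy
on:
  push:
    branches: [main]        # hypothetical trigger branch

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      # Plan and apply in separate steps so the saved plan is
      # exactly what gets applied
      - run: terraform init -input=false
      - run: terraform plan -input=false -out=tfplan
      - run: terraform apply -input=false tfplan
```

Splitting plan and apply this way makes pipeline troubleshooting easier, since a failed apply can be traced back to the exact plan that produced it.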