Patricia Anong

Who Am I?

I am Patricia Anong, a Baltimore-based, solutions-driven freelance Cloud Architect and Automation Engineer with a proven track record of supporting multi-cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and a background in Database Administration.

I have extensive experience with cloud migrations, utilizing my knowledge of the Database Life Cycle (DBLC) and multiple Cloud Platforms to architect Highly Available, Fault-Tolerant, Elastic, and Scalable environments. I have worked in hybrid Cloud and Managed Data Center environments leveraging Multi-Cloud Architectures, as well as in On-Premises Data Center environments.

I enjoy taking on challenging and exciting projects that help companies maximize their cloud investments by utilizing DevOps methodologies.

I currently hold the following certifications: Certified Kubernetes Administrator, Certified Kubernetes Application Developer, AWS Certified Security - Specialty, Google Cloud Certified Professional Cloud Architect, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, Google Cloud Certified Associate Cloud Engineer, AWS Certified Solutions Architect - Associate, AWS Certified Developer - Associate, AWS Certified SysOps Administrator - Associate, Dell Boomi Developer I, and HashiCorp Certified Terraform Associate.

EXPERIENCE


DevOps ProServe

I integrated with a team in the Electric Automotive Industry to support internal teams with their AWS Infrastructure:

  • Build AWS infrastructure using YAML CloudFormation templates:

    • Service Catalog Products (RDS, EC2, S3, and SageMaker Notebooks), including automatic Slack notifications via Slack Webhooks.

  • Configure OIDC integration for Azure AD.

  • Configure AWS Cloud Development Kit (CDK) scripts in Python as IaC to deploy and manage resources.

  • Create YAML CloudFormation templates to configure Service Catalog and its products.

  • Set up Route53 records and AWS Certificate Manager (ACM) certificates for Private and Public DNS routing.

  • Automate the creation of Amazon Elastic Container Service (ECS) clusters using Fargate and EC2, as well as CodePipeline pipelines that deploy containers to the clusters using CodeBuild and CodeDeploy.
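
The Slack notifications mentioned above follow a simple pattern: an incoming webhook is just an HTTP POST of a small JSON body. As a rough stdlib sketch (not the project's actual code; the product names, message format, and function names here are invented for illustration):

```python
import json
import urllib.request

def build_payload(product, status):
    """Build the Slack message body for a provisioning event (illustrative format)."""
    return {"text": f"Service Catalog product `{product}` is now {status}"}

def notify(webhook_url, product, status):
    """POST the message to a Slack incoming webhook and return the HTTP status."""
    data = json.dumps(build_payload(product, status)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

In practice, a Lambda subscribed to Service Catalog events would call something like `notify(...)`, with the webhook URL pulled from a secret store rather than hard-coded.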


Azure Enterprise Migration

I supported the cloud migration for a company in the Financial Industry that handles HealthCare Information and Data, migrating its Platform into Azure from an on-premises Datacenter. The migration consisted of:

  • Solutions Architecture for an On-Premises to Multi-Tenant Azure Migration with a Warm Disaster Recovery (DR) Location.

  • Design a fault-tolerant, highly available platform that met regulatory compliance standards for the industries they service (SOC, PCI, PHI/HIPAA).

  • Design, define, and deploy AD Structure, Virtual Networks, VPN Gateway, Azure Virtual Desktops, PostgreSQL database, Azure Files, Azure Key Vault, and other Resources via Terraform, Azure DevOps, and Azure Pipelines to the new multi-tenant cloud environment.

  • Configure Highly Available and Redundant Azure AD Domain Services with multiple Replica Sets.

  • Configure and Deploy Fortigate Firewall in an Active-Active Highly Available configuration using Terraform.

  • Organize and Deploy multi-tenant Azure Active Directory Components using Terraform.

  • Create and Manage Azure App Service Environment (ASE) in an Isolated Environment with Windows and Linux App Service Containers and multiple Deployment Slots using Terraform.

  • Create Azure DevOps Pipeline YAML Files to handle automated deployments to the ASE Slots in multiple subscriptions.

  • Create and deploy Windows Virtual Desktops (VDI), workspaces, Availability Sets, Host Pools, and other necessary components using Terraform.

  • Replicate the solutions architecture in a DR location using Terraform for IaC configuration, ensuring Networking and Security Best Practices and utilizing Azure Site Recovery and Geo-Replication for the appropriate resources.


Machine Learning Operations (MLOps)

I worked closely with data scientists and developers to architect and automate the infrastructure and CI/CD pipelines for the machine learning (ML) applications running in Azure and Google Cloud Platform (GCP). Some of the projects I completed include:

  • Architect and Create Continuous Integration and Continuous Delivery (CI/CD) Pipelines in Azure using Azure DevOps.

  • Design and Deploy Secure Environments, ensuring Networking and Security Best Practices in Azure using Terraform.

  • Architect and Deploy Azure Infrastructure components to support a Kotlin application running in Azure Kubernetes Service (AKS) using tools such as Terraform, Docker, Packer, and Ansible.

  • Create a CI/CD Pipeline running on Azure DevOps to deploy to the Private AKS Cluster from a Private Azure Container Registry (ACR).

  • Replicate the Azure Infrastructure architecture in GCP using Terraform to manage and provision GKE, Cloud Functions, Cloud Build, VPC, Cloud Datastore, Google App Engine, Cloud Storage, IAP, KMS, and NAT Gateway.

  • Configure and Deploy custom Apache Airflow in GKE using Helm.

  • Create Directed Acyclic Graphs (DAGs) written in Python.
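
Airflow DAGs are plain Python. To illustrate the dependency-ordering idea behind a DAG, here is a stdlib sketch (not Airflow's API; the task names are invented):

```python
from graphlib import TopologicalSorter

# Hypothetical ML pipeline tasks: each task maps to the set of tasks it depends on.
deps = {
    "preprocess": {"extract"},
    "train": {"preprocess"},
    "evaluate": {"train"},
}

# static_order() yields an execution order that respects every dependency.
order = list(TopologicalSorter(deps).static_order())
print(order)  # e.g. ['extract', 'preprocess', 'train', 'evaluate']
```

In Airflow itself, the same shape is expressed with operators and the `>>` dependency syntax, and the scheduler performs this ordering for you.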


VA Additional Presumptive Capacity Automation Services (APCAS)

As a Cloud Engineer on the Veterans Affairs Additional Presumptive Capacity Automation Services (APCAS) Project, I worked with multiple teams to automate the infrastructure and CI/CD pipeline for the machine learning (ML) applications that process veteran documents. Some of the work I completed on that project includes:

  • Architect and Deploy a highly available, scalable Multi-Tier Application Infrastructure in a greenfield AWS GovCloud environment.

  • Create Infrastructure as Code (IaC) using Terraform.

  • Build AWS infrastructure with Terraform:

    • AWS Core Services (S3, IAM, VPC, EC2, Auto-Scaling Groups, ALB, CloudWatch, Parameter Store, Systems Manager, Secrets Manager, KMS)

    • AWS Serverless Services (Lambda, API Gateway)

    • AWS CodeStar Services (CodePipeline, CodeBuild)

    • AWS Container Services (AWS ECS, EKS, ECR)

    • AWS Messaging Services (SQS, SNS)

    • AWS Database Services (DocumentDB, RDS)

  • Automate the deployment of Hyperscience V31 using Terraform and Ansible.


VA Veterans Intake, Conversion and Communication Services (VICCS)

As a Cloud Engineer on the Veterans Affairs Veterans Intake, Conversion and Communication Services (VICCS) Project, I worked with multiple teams to automate the infrastructure and CI/CD pipeline for the machine learning (ML) applications that automatically process all incoming mail for VBA Benefit Claims. Some of the work I completed on that project includes:

  • Architect and Deploy a highly available, scalable Multi-Tier Application Infrastructure in AWS GovCloud.

  • Create Infrastructure as Code (IaC) using Terraform.

  • Build AWS infrastructure with Terraform:

    • AWS Core Services (S3, IAM, VPC, EC2, Auto-Scaling Groups, ALB, CloudWatch, Parameter Store, Systems Manager, Secrets Manager, KMS)

    • AWS Serverless Services (Lambda, API Gateway)

    • AWS CodeStar Services (CodePipeline, CodeBuild)

    • AWS Container Services (AWS ECS, EKS, ECR)

    • AWS Messaging Services (SQS, SNS)

    • AWS Database Services (DocumentDB, RDS)

  • Automate the deployment and upgrade of Hyperscience using Terraform and Ansible.


VA Enterprise Mobility Management (EMM)

As a Cloud Engineer on the Veterans Affairs Enterprise Mobility Management Project, I worked with multiple teams to support all iOS and Android devices used by VA Personnel. Some of the work I completed on that project includes:

  • Architecting and configuring a highly available, scalable Multi-Tier Application Infrastructure in AWS GovCloud.

  • Migrating AirWatch Application Servers and SQL Server Databases from IBM Terremark to AWS GovCloud using a Replatforming Migration Strategy.

    • Migrating data from Terremark to AWS using AWS Snowball.

    • Rebuilding the infrastructure in AWS with high availability, scalability, and cost optimization at the core of the replatformed architecture.

  • Creating Infrastructure as Code (IaC) using CloudFormation.

  • Configuring serverless solutions using Lambda to tag all resources (including volumes) and take nightly snapshots of all Production servers via CloudWatch Events, eventually migrating to AWS Backup when it became available in GovCloud.

  • Working with the VA internal Networking Team to ensure security and appropriate routing for all application traffic and data using AWS services as well as third-party load balancing solutions (HAProxy):

    • Generating certificates for all servers to ensure appropriate routing and networking configurations within the VA Network.

    • Provisioning, Configuring, and Managing HAProxy Servers using Terraform.

    • Provisioning and Configuring Application Load Balancers and Target Groups using Terraform.

    • Manually setting up Advanced Routing Rules based on custom HTTP headers.

    • Installing and Configuring HAProxy servers on EC2 for use within the Veterans Affairs (VA) Internal Network.

  • Building AWS infrastructure with CloudFormation:

    • AWS Infrastructure Scripting (CloudFormation)

    • AWS Core Services (S3, IAM, VPC, EC2, Auto-Scaling Groups, ELB, ALB, CloudWatch, Parameter Store, KMS)

    • AWS Serverless Services (Lambda)

    • AWS Messaging Services (SNS)

    • AWS Backup

  • Converting all Infrastructure from CloudFormation to Terraform 0.12, then upgrading from v0.12 to v0.13 and v0.14.

  • Upgrading all Infrastructure as Code from Terraform v0.14 to Terraform v1.x.
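
One of the items above describes a Lambda that tags resources and takes nightly snapshots on a CloudWatch Events schedule. As a sketch of just the tag-construction step (the tag keys and names here are illustrative, and the actual boto3 EC2 calls are left as a comment):

```python
from datetime import datetime, timezone

def snapshot_tags(volume_id, server_name, now=None):
    """Tags a nightly-snapshot Lambda might apply to each new snapshot (illustrative)."""
    now = now or datetime.now(timezone.utc)
    return [
        {"Key": "Name", "Value": f"{server_name}-nightly-{now:%Y-%m-%d}"},
        {"Key": "SourceVolume", "Value": volume_id},
        {"Key": "CreatedBy", "Value": "nightly-snapshot-lambda"},
    ]

# In the Lambda handler, these tags would be passed to EC2, roughly:
#   ec2.create_snapshot(VolumeId=volume_id,
#                       TagSpecifications=[{"ResourceType": "snapshot",
#                                           "Tags": snapshot_tags(volume_id, name)}])
```

Keeping the tag logic in a pure function like this makes it easy to unit test without touching AWS.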


AWS Enterprise Migration (Replatforming)

I worked closely with developers and other stakeholders in the Educational Services Sector on an enterprise migration to AWS from a data center. The migration consisted of:

  • Modernize and containerize Java applications using Docker.

    • Upgrade Java version

    • Package upgraded Java Application using Docker

    • Store the images in ECR.

  • Automate the CI/CD of the containerized applications using CodePipeline to ECS from Gitlab using Terraform.

    • Create S3 webhooks to replicate the GitLab repository to a bucket (GitLab integration was not natively supported in CodePipeline).

  • Upgrade Terraform Code from v0.11 to v0.14.

  • Integrate the use of RabbitMQ in the CI/CD Pipeline.
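
The S3 webhook workaround above relies on the fact that CodePipeline's S3 source action consumes a zip of the repository's working tree. A stdlib sketch of the archiving step (the function name and paths are hypothetical; the S3 upload itself is left as a comment):

```python
import io
import zipfile
from pathlib import Path

def zip_working_tree(repo_dir):
    """Zip a checked-out repository, skipping .git, into the flat archive
    shape an S3 source action expects (illustrative sketch)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(repo_dir).rglob("*")):
            if ".git" in path.parts or path.is_dir():
                continue
            zf.writestr(path.relative_to(repo_dir).as_posix(), path.read_bytes())
    buf.seek(0)
    # A CI job would then upload this buffer to the pipeline's source bucket,
    # e.g. with boto3: s3.put_object(Bucket=..., Key="source.zip", Body=buf)
    return buf
```

A GitLab CI job running something like this on every push gives CodePipeline a native S3 trigger even without GitLab support.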


AWS Enterprise Migration (Re-Architecting)

I worked closely with developers and other stakeholders in the Financial Technology Sector on an enterprise migration to AWS from a data center. The migration consisted of:

  • Building CI/CD Pipeline in AWS from Legacy Systems using CodePipeline, Service Catalog, Veracode, Terraform, Docker, and JFrog Artifactory.

    • Integrating Code Analysis, Unit Testing, and Application Scans using SonarQube and Veracode into the AWS CodePipeline pipelines.

    • Building Self-Serve Infrastructure via Service Catalog using Terraform Servers to provision AWS resources.

  • Upgrading existing Terraform Configurations to Terraform v0.12 and training internal DevOps Engineers.

  • Managing a team of 8 junior DevOps Engineers as well as offshore resources.


Stelligent Systems

As a DevOps Automation Engineer at Stelligent Systems, an AWS Premier Partner, I worked with multiple teams to help enterprises leverage the AWS platform to accelerate their software delivery and development automation efforts. Some of the projects I have completed include:

  • Building AWS infrastructure with CloudFormation and Terraform:

    • AWS Infrastructure Scripting (CloudFormation)

    • AWS Core Services (S3, IAM, VPC, EC2, Auto-Scaling Groups, ELB, CloudWatch, Parameter Store, Systems Manager, KMS)

    • AWS Code* Services (CodePipeline, CodeBuild)

    • AWS Serverless Services (Lambda, API Gateway)

    • AWS Security Services (AWS Config, GuardDuty)

    • AWS Service Catalog

    • AWS Container Services (AWS ECS, EKS, ECR)

  • Ensure Continuous Integration (CI) and Continuous Delivery (CD) by using tools such as CloudFormation, Terraform, Ansible Tower, Veracode, SonarQube, AWS Service Catalog, and Jenkins.

  • Implement DevSecOps using InSpec and config-lint to ensure compliance rules were maintained in Kubernetes.

  • Building Self-Serve Infrastructure via Service Catalog using Terraform Servers to provision AWS resources.
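
To give a flavor of the kind of rule the InSpec/config-lint work above enforced, here is a plain-Python re-implementation of one typical check (this is not either tool's DSL, and the manifest is invented):

```python
def privileged_violations(pod_spec):
    """Return a finding for every container that runs privileged, a
    typical CIS-style Kubernetes rule that compliance tooling enforces."""
    findings = []
    for container in pod_spec.get("spec", {}).get("containers", []):
        ctx = container.get("securityContext") or {}
        if ctx.get("privileged"):
            findings.append(f"container {container.get('name')!r} runs privileged")
    return findings

# Example manifest fragment (illustrative):
pod = {
    "kind": "Pod",
    "spec": {"containers": [
        {"name": "app", "securityContext": {"privileged": False}},
        {"name": "sidecar", "securityContext": {"privileged": True}},
    ]},
}
print(privileged_violations(pod))  # flags only 'sidecar'
```

Tools like config-lint express the same assertion declaratively over YAML manifests, so it can run in CI before anything reaches the cluster.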


Fearless Solutions

As a Senior DevOps Engineer at Fearless Solutions, I worked on the Beneficiary Claims Data API (BCDA) and Beneficiary Claims Data Auth teams to architect and support multiple AWS environments for multiple Centers for Medicare & Medicaid Services (CMS) teams, drawing on the skills and knowledge acquired from my prior experience and AWS certifications to ensure a highly scalable, highly available, and secure infrastructure. Some of the projects I have completed include:

  • Providing technical design, implementation, and support services, as well as Cloud Architectural knowledge, to the Software Development team, Stakeholders, and Business Leaders.

  • Ensuring Continuous Integration (CI) and Continuous Delivery (CD) by using tools such as Terraform, Ansible, Packer, and Jenkins to configure the BCDA environment.

  • Supporting the Authentication Team by integrating Okta as the user management tool in the CI pipeline.

  • Implementing DevSecOps using InSpec and Heimdall for Security and Compliance following NIST Guidelines.

  • Supporting multiple CMS teams by creating CI/CD pipelines following best practices to adopt DevOps Methodologies.

  • Providing documentation and training to internal teams for ongoing management of the AWS Environment and CI/CD pipeline.

  • Creating Agile Security Impact Analyses (SIA) for multiple BCDA systems.


Stratus Solutions

As an Infrastructure Cloud Engineer at Stratus Solutions, I worked on the DevOps team to architect and support multiple AWS environments, drawing on the skills and knowledge acquired from my prior experiences and AWS certifications to ensure a highly scalable, highly available, and secure infrastructure. Some of the projects I have completed include:

  • Providing technical design, implementation, and support services as well as Cloud Architectural knowledge across Cloud Platforms to the Software Development team, Stakeholders, and Business Leaders.

  • Assessing AWS Environments and making cost optimization and resource improvement recommendations based on Business and Technical requirements.

  • Installing, Upgrading, Securing, Monitoring, and Administering Kubernetes clusters in AWS Commercial and GovCloud Regions.

  • Configuring Helm charts to package and deploy Kubernetes Applications easily, efficiently, accurately, and at scale.

  • Ensuring Kubernetes Cluster security by employing kube-bench and Nessus to complete security assessments and remediating all issues in a timely manner.

  • Creating CloudFormation Templates and Topology Diagrams for all supported AWS Environments.

  • Utilizing GitLab, Terraform, Packer, Docker, and CloudFormation Templates to provision, build, deploy, update, and manage new and existing Infrastructure and Applications.

  • Working closely with Software Developers to improve Continuous Integration (CI) and Continuous Delivery/Deployment (CD) pipeline in an Agile environment.


ConnectYourCare

As the Lead DBA and Cloud Architect at ConnectYourCare, I worked closely with the development and applications teams to innovate, manage, and address the organization's overall concerns with data. Some projects I have completed include:

  • Performing an in-depth analysis of current OLTP environments and providing insights and recommendations based on findings, as well as providing direction to Rackspace DBAs to implement changes based on findings.

  • Working closely with the security team to maintain SOC2 and HIPAA compliance.

  • Creating, maintaining, and documenting automation initiatives for database tasks and batch jobs.

  • Working closely with Database Developers to design and implement a Data Warehouse for Analysis and Reporting, including architecting the server buildout, hardware components, data structure, and database provisioning for the initial implementation of a Data Warehouse Environment.

  • Performing Data Modeling and providing production support, as well as determining and implementing the most cost-effective resources and ETL process to load data into the warehouse from the On-Premises OLTP source utilizing Data Pump, database links, and stored procedures.

  • Creating processes, procedures, policies, and standards for the database functions.


Under Armour

While at Under Armour, I worked with stakeholders and project managers to achieve project goals in an Agile work environment. A few of the projects I completed include:

  • Successfully installing, configuring, and validating a manual disaster recovery solution utilizing Oracle Database software on a *NIX-based server.

  • Successfully provisioning Oracle Enterprise Manager Cloud Control 12c on an Oracle 11g EE database and deploying the monitoring agents to ensure OFA Compliance.

  • Installing, configuring, and managing several multi-node, multi-subnet SQL Server 2014 Failover Clusters on Windows 2012 R2 in AWS for geographically dispersed Disaster Recovery (DR) Failover utilizing AlwaysOn Availability Groups for High Availability (HA).

  • Utilizing a Hybrid Computing strategy to install and configure Primary Control-M EM servers and Control-M Job Servers, and to deploy Control-M Agents in AWS, SAP HEC, and on-premises Windows and Linux platforms, with redundant entities for load balancing and High Availability. Configuring LDAP access and launching test jobs against SQL Server, Oracle, SAP, and SAP Business Objects databases to test the installed plugins. Applying patches and fix packs on the Control-M EM and Job servers and documenting the processes.

  • Installing local Boomi Atoms on Windows servers in development, QA, and production; utilizing Boomi AtomSphere to maintain and manage the local and cloud Atoms; developing integrations and deploying FTP and ETL processes for the HR team from AtomSphere to both local and cloud Atoms; and obtaining my certification as a Dell Boomi Integration Developer.


Chaveran, Inc

During my time at Chaveran, I worked with multiple teams mostly dealing with infrastructure and the full life cycle of a database, while employing Oracle best practices. Some projects I completed include:

  • Designing and documenting the full life cycle of a database by creating and normalizing ERDs using DB Designer Fork and SQL Developer, then installing the Oracle RDBMS software and creating the database for development and testing.

  • Modeling the logical and physical design of database objects using DB Designer Fork and utilizing SQL Loader to import data into the database.

  • Upgrading and migrating Oracle EE Database from 11g FS to 12c ASM and successfully configuring OEM 12c Cloud Control with multiple agent deployments.

  • Maintaining documentation for Standard Operating Procedures, After Action Reports, configuration setup and installation and patching procedures.

  • Automating backup and database refresh jobs utilizing RMAN and Data Pump based on SLAs.

 

Click on any of the images to validate my certifications.