![Profile](https://weekday-user-pictures.s3.ap-south-1.amazonaws.com/profile-images/shrestha-almina.jpg)
Almina Shrestha
Cloud Data Engineer
5.9
Years of Experience
Education
Companies
Wells Fargo, T-Mobile, Lama Insurance Agency, New Prestige Automotive Care, Goodwill Finance Company
Almina's contact details
Email (Verified)
almXXXXXXXXXXXXXXXXXXXXXXXXXom
Experience
2021 - Present
Wells Fargo
Cloud Data Engineer
- Developing streaming and batch applications with Apache Spark, AWS Glue, Python, and Apache Airflow; building BI dashboards, RESTful APIs, and Grafana dashboards; and implementing end-to-end CI/CD pipelines with Jenkins.
- Working in multiple programming languages, primarily Python and Scala, to build modern data pipelines, including optimized ETL scripts in AWS Glue and ingestion scripts for AWS RDS and S3.
- Implementing, automating, and maintaining large-scale enterprise ETL processes for a global client base using the Apache Spark ecosystem and the Lambda architecture.
- Using Apache Spark, Spark components, and Sqoop for big data processing; building ETL pipelines for Snowflake ingestion; and working with Spark Streaming and Apache Kafka on live stream data.
- Working extensively with AWS services such as EC2, S3, VPC, RDS, DynamoDB, and Redshift, and deploying LAMP-based applications in the AWS environment with S3 buckets provisioned for backup.
- Collaborated with the team on the migration to AWS, implemented AWS Redshift, and designed AWS virtual servers using Ansible roles for web application deployment, including Tableau integration for data visualization.
- Used Kubernetes to deploy, scale, load-balance, and manage Docker containers across multiple namespaces, and provisioned monitoring and logging systems on AWS with tools such as Splunk, ELK, and Sense.
- Managed AWS infrastructure with automation and configuration management tools such as Chef and Ansible, and used Amazon Route 53 for DNS zone management and public DNS naming of elastic load balancers.
- Developed Python ingestion scripts for daily data loads into AWS RDS and S3, and automated the daily jobs with Apache Airflow to optimize the process.
- Loaded real-time streaming data using Event Hubs, built BI dashboards in Tableau for metric reporting, and developed Grafana dashboards for daily metric automation and alerting.
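The daily ingestion pattern described above can be sketched in plain Python. This is an illustrative sketch only; the partition layout, field names, and validation rules are hypothetical, not the actual pipeline code:

```python
from datetime import date


def s3_partition_key(prefix: str, run_date: date) -> str:
    """Build a Hive-style partitioned S3 key for one daily batch.

    The year=/month=/day= layout is a common convention that lets
    engines like Spark and Glue prune partitions by date.
    """
    return (f"{prefix}/year={run_date.year}"
            f"/month={run_date.month:02d}"
            f"/day={run_date.day:02d}/data.parquet")


def validate_records(records):
    """Split raw records into loadable rows and rejects.

    Hypothetical rule: a row needs a truthy 'id' and a numeric 'amount'
    before it may be written to RDS or S3.
    """
    good, bad = [], []
    for row in records:
        if row.get("id") and isinstance(row.get("amount"), (int, float)):
            good.append(row)
        else:
            bad.append(row)
    return good, bad
```

In a pipeline like the one described, an Airflow DAG would typically call helpers of this shape on a daily schedule before writing the validated batch to RDS and S3.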
2018 - 2021
T-Mobile
Data Engineer
- Developed high-impact big data solutions with Apache Spark, Hadoop, Splunk, Kafka, Hive, Pig, and SQL to process and analyze large volumes of data in real time for pattern matching and predictive modeling.
- Developed reports and dashboards in Splunk, using its machine learning toolkit for log-clustering models.
- Optimized Kafka to handle massive log volumes, and wrote Lambda functions to read files from S3 and identify records missing from a database.
- Maintained the data pipeline while ingesting streaming and transactional data from multiple primary sources using Spark and Redshift.
- Joined tables with Spark and Python, ran analytics on the results, and actively participated in upgrades and troubleshooting activities across the enterprise.
- Supported performance troubleshooting and the migration to cloud-based data integration, and developed real-time data pipelines that provided insight for debugging system integrations.
- Created portable data pipelines to provide a standard foundation for machine learning.
- Worked with clients to understand their business needs and translated them into actionable Tableau reports.
- Supported continuous automation of data ingestion and pipeline workflows by developing best-practice documentation, and documented the preferred methods for transforming data into visualizations.
- Crafted new architectural options and analyses around the Snowflake data warehouse.
- Implemented test scripts to support test-driven development and continuous integration.
- Used Spark for parallel data processing, improved performance with various Spark functions, and executed SQL queries against big data workloads.
- Created documentation guides for modern automated data pipelines built with cloud-based tools.
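The S3-versus-database reconciliation mentioned above reduces to a set difference once both key lists are in hand. A minimal sketch (the boto3 listing and database query are omitted, and the key names are hypothetical):

```python
def missing_from_db(s3_keys, db_keys):
    """Return S3 object keys with no matching database record.

    Uses a set for O(1) membership tests while preserving the order
    in which the objects were listed from S3.
    """
    seen = set(db_keys)
    return [key for key in s3_keys if key not in seen]
```

In the Lambda described above, `s3_keys` would come from listing the bucket and `db_keys` from a query against the target table; anything the function returns is a file that landed in S3 but never made it into the database.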
2017 - 2018
Lama Insurance Agency
Claims Data Analyst
- Analyzed and interpreted claims data to support decision-making, improve operational efficiency, and enhance customer satisfaction.
- Collaborated with stakeholders across the company to identify trends, uncover insights, and provide data-driven recommendations for optimizing the claims process.
- Collected, validated, and organized claims data from multiple sources, ensuring data integrity and accuracy.
- Established and maintained data management processes, including storage, retrieval, and archiving.
- Developed and maintained reports, dashboards, and visualizations to communicate insights to stakeholders, including management and other departments.
- Ensured that all data analysis and reporting activities adhered to relevant regulatory guidelines and company policies.
- Maintained a thorough understanding of data privacy regulations and implemented appropriate safeguards to protect sensitive information.
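The validation step in claims-data work like this often amounts to per-record integrity checks. A minimal sketch, assuming a hypothetical `CLM-######` claim-number format and field names invented for illustration:

```python
import re
from datetime import datetime

# Hypothetical claim-number format: "CLM-" followed by six digits.
CLAIM_ID = re.compile(r"^CLM-\d{6}$")


def validate_claim(row: dict) -> list:
    """Return a list of data-integrity problems found in one claims record."""
    problems = []
    if not CLAIM_ID.match(row.get("claim_id", "")):
        problems.append("bad claim_id")
    try:
        datetime.strptime(row.get("loss_date", ""), "%Y-%m-%d")
    except ValueError:
        problems.append("bad loss_date")
    amount = row.get("paid_amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("bad paid_amount")
    return problems
```

Records with an empty problem list flow into reporting; the rest are routed back to the source system, which keeps dashboards built on the data trustworthy.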
2015 - 2017
New Prestige Automotive Care
Administrative Officer / Accountant
2013 - 2015
Goodwill Finance Company
Supervisor
2010 - 2013
Goodwill Finance Company
Assistant Supervisor