10 Common AWS S3 Mistakes

Swapnil Vishwakarma · Last Updated: 09 Jan, 2023
16 min read

This article was published as a part of the Data Science Blogathon.

Introduction

Are you using Amazon Web Services (AWS) Simple Storage Service (S3) to store your data and media files? If so, you’re not alone – AWS S3 is a popular choice for its scalability and reliability. However, as with any service, it’s easy to fall into common traps when working with S3. In this blog, we will explore 10 common AWS S3 mistakes and how to fix them. By understanding and avoiding these mistakes, you can save yourself time, frustration, and potential data loss. Whether you’re a new AWS S3 user or an experienced pro, this blog will provide valuable insights and solutions to help you get the most out of the service. So, if you want to ensure that your data is appropriately stored and managed in S3, read on!

 

(Image source: Pixabay)

10 AWS S3 Mistakes You Should Avoid & How to Fix Them

  1. Not Setting Up Versioning for Your S3 Bucket

One of the most common AWS S3 mistakes people make is not setting up versioning for their bucket. Versioning allows you to preserve, retrieve, and restore previous versions of objects in your bucket. This can be especially useful if you accidentally delete an object or want to retrieve an older version of an object.

To set up versioning for your S3 bucket, follow these steps:

  1. Open the S3 console and click on the name of the bucket that you want to enable versioning for.
  2. Click on the “Properties” tab and then click on the “Versioning” sub-tab.
  3. Click on the “Edit” button and then select the “Enable versioning” option.
  4. Click on the “Save” button to save your changes.

Here is an example of how to use the CLI to enable versioning for an S3 bucket:

aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled

In this example, the “my-bucket” bucket would be set to have versioning enabled. This means that any new objects added to the bucket would be versioned, and previous versions of an object would be preserved when it is overwritten or deleted.

It’s important to note that once versioning is enabled for a bucket, it can never be fully removed; it can only be suspended. Suspending versioning stops new versions from being created, but all existing object versions are preserved.
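
To see which versions of an object exist, and to restore an older one, you can copy a previous version over the current object. A minimal sketch; the version ID shown is a placeholder:

# List all versions of the object
aws s3api list-object-versions \
    --bucket my-bucket \
    --prefix my-object

# Restore an older version by copying it over the current object
aws s3api copy-object \
    --bucket my-bucket \
    --key my-object \
    --copy-source "my-bucket/my-object?versionId=EXAMPLE_VERSION_ID"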

Advantages:

  • Allows you to preserve, retrieve, and restore previous versions of objects in your bucket
  • Can prevent accidental or malicious deletions of objects
  • Can enable you to retrieve older versions of objects

Disadvantages:

  • Can increase the storage cost of your S3 bucket (if you have a large number of versions of objects)
  • Can increase the complexity of managing your S3 bucket (if you have a large number of versions of objects)

 

  2. Not Using the Correct Storage Class for Your Objects

(Image source: Pexels)

The next one on the list of common AWS S3 mistakes is not using the correct storage class for your S3 objects. S3 offers a range of storage classes, each with its unique features and pricing model. Choosing the right storage class can help you save money and improve the performance of your S3 bucket.

To choose the right storage class for your objects, consider the following factors:

  • How often you access the objects in your bucket
  • How long you want to store the objects
  • How quickly you need to retrieve the objects when you do access them

Here are the different storage classes that S3 offers and when to use them:

  • Standard: This is the default storage class for S3. It’s best for frequently accessed objects and provides low-latency, high-throughput performance.
  • Standard-Infrequent Access (Standard-IA): This storage class offers a lower per-GB storage price than Standard but adds a per-GB retrieval fee. It’s best for objects that are accessed less frequently but still need to be available within milliseconds.
  • Glacier: This is among the cheapest storage that S3 offers. It’s best for objects that are rarely accessed and can tolerate longer retrieval times (e.g. backups or long-term archives).
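
If you already know an object’s access pattern, you can pick its storage class at upload time. A minimal sketch, assuming a local file named my-file.txt:

aws s3 cp my-file.txt s3://my-bucket/my-file.txt \
    --storage-class STANDARD_IA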

To change the storage class of an object, follow these steps:

  1. Open the S3 console and click on the name of the bucket that contains the object.
  2. Click on the name of the object that you want to change the storage class for.
  3. Click on the “Actions” button and then select the “Change storage class” option.
  4. Choose the storage class that you want to use for the object and then click on the “Change” button.

There is no dedicated CLI command for changing an object’s storage class in place. Instead, you copy the object over itself with a new storage class using the aws s3api copy-object command:

aws s3api copy-object \
    --bucket my-bucket \
    --key my-object \
    --copy-source my-bucket/my-object \
    --storage-class STANDARD_IA

In this example, the object named “my-object” in the “my-bucket” bucket would be rewritten with the “STANDARD_IA” storage class. (The older “REDUCED_REDUNDANCY” class that some examples use is no longer recommended.)

It’s important to note that this command only works for objects stored in S3. It does not work for objects stored in other storage services offered by AWS, such as EBS or EFS.
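
To confirm an object’s current storage class, you can inspect its metadata. A quick sketch:

# Note: S3 omits StorageClass from the response for STANDARD objects
aws s3api head-object \
    --bucket my-bucket \
    --key my-object \
    --query StorageClass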

Advantages:

  • Can help you save money on storage costs
  • Can improve the performance of your S3 bucket
  • Can allow you to choose the right storage class for different types of objects

Disadvantages:

  • Can require additional effort to set up and manage different storage classes
  • Can make it more difficult to access objects stored in the S3 Glacier storage class (due to longer retrieval times)

 

  3. Not Setting Up Access Control for Your S3 Bucket

The next mistake that people often make with AWS S3 is not setting up proper access control for their bucket. Access control allows you to control who can access the objects in your bucket and what actions they can perform on those objects. This is important for security and compliance reasons.

To set up access control for your S3 bucket, follow these steps:

  1. Open the S3 console and click on the name of the bucket that you want to set up access control for.
  2. Click on the “Permissions” tab and then click on the “Bucket Policy” sub-tab.
  3. Click on the “Edit” button and then enter the policy that you want to use for the bucket.

Here’s an example bucket policy that allows public, read-only access to the objects in the bucket (use a policy like this only when the content is truly meant to be public):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
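
To apply a policy like this with the AWS CLI, you can save it to a file (here assumed to be named policy.json) and run:

aws s3api put-bucket-policy \
    --bucket my-bucket \
    --policy file://policy.json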

You can also manage access at the bucket level using access control lists (ACLs). Here is an example of how to use the AWS CLI to grant read and write access to a specific AWS account for an S3 bucket:

aws s3api put-bucket-acl \
    --bucket my-bucket \
    --grant-read 'id=1234abcd1234abcd1234abcd' \
    --grant-write 'id=1234abcd1234abcd1234abcd'

In this example, the “my-bucket” bucket would be set to allow the AWS account with the ID “1234abcd1234abcd1234abcd” to read and write objects in the bucket. You can replace this placeholder with the canonical user ID (not the 12-digit account ID) of the AWS account that you want to grant access to.

It’s important to note that the aws s3api put-bucket-acl command replaces the existing ACL for the bucket with the one specified in the command; running it multiple times does not accumulate grants. If you want to add grants to the existing ACL, use the aws s3api get-bucket-acl command to retrieve the current ACL, add the new grants to it, and then apply the updated ACL with aws s3api put-bucket-acl.
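
A sketch of that read-modify-write flow, assuming you merge the new grants into acl.json by hand:

# Fetch the current ACL (includes the Owner and Grants fields)
aws s3api get-bucket-acl --bucket my-bucket > acl.json

# ... edit acl.json to add the new grants ...

# Re-apply the full, updated ACL
aws s3api put-bucket-acl \
    --bucket my-bucket \
    --access-control-policy file://acl.json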

Advantages:

  • Can improve the security of your S3 bucket
  • Can prevent unauthorized access to your objects
  • Can enable you to control who can access your objects and what actions they can perform on those objects

Disadvantages:

  • Can require additional effort to set up and manage access control policies
  • Can make it more difficult for authorized users to access your objects (if you have overly restrictive policies)

 

  4. Not Enabling MFA Delete for Your S3 Bucket

The next one on the list of common AWS S3 mistakes is not enabling MFA delete for your S3 bucket. MFA delete is a security feature that requires users to provide a valid MFA code before they can permanently delete objects from your bucket. This can help prevent accidental or malicious deletions of objects in your bucket.

Unlike most S3 settings, MFA delete cannot be enabled from the S3 console. It must be enabled through the AWS CLI or API, using the credentials of the bucket owner (the root account) together with a valid code from the account’s MFA device.

To enable MFA delete using the AWS CLI, use the following command (the MFA device ARN and code shown are placeholders):

aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"

In this example, the “my-bucket” bucket would be set to have versioning enabled and MFA delete enabled. This means that any new objects added to the bucket would be versioned, and previous versions of an object would be preserved when it is overwritten or deleted. It also means that a valid MFA token would be required in order to permanently delete an object from the bucket.

It’s important to note that MFA delete can only be enabled if versioning is also enabled for the bucket. It can be turned off again later, but disabling it likewise requires the root account’s credentials and a valid MFA code.
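
With MFA delete enabled, permanently deleting a specific object version requires the MFA code as well. A minimal sketch; the version ID, device ARN, and code are placeholders:

aws s3api delete-object \
    --bucket my-bucket \
    --key my-object \
    --version-id EXAMPLE_VERSION_ID \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"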

Advantages:

  • Can prevent accidental or malicious deletions of objects in your S3 bucket
  • Can improve the security of your S3 bucket
  • Can enable you to require additional authentication before objects can be deleted

Disadvantages:

  • Can require additional effort to set up and manage MFA delete
  • Can make it more difficult for authorized users to delete objects in your S3 bucket (if they don’t have access to an MFA device)

 

  5. Not Setting Up Lifecycle Rules for Your S3 Bucket

The next one on the list of common AWS S3 mistakes people often make is not setting up lifecycle rules for their bucket. Lifecycle rules allow you to automate the management of the objects in your bucket. For example, you can use lifecycle rules to move objects to a cheaper storage class after a certain number of days, or to permanently delete objects once they are no longer needed.

To set up lifecycle rules for your S3 bucket, follow these steps:

  1. Open the S3 console and click on the name of the bucket that you want to set up lifecycle rules for.
  2. Click on the “Management” tab and then click on the “Lifecycle” sub-tab.
  3. Click on the “Add rule” button and then enter the details of the rule that you want to create.

Here’s an example lifecycle rule that moves objects to the Standard-IA storage class after 30 days and permanently deletes them after 90 days:

{
  "Expiration": {
    "Days": 90
  },
  "ID": "MoveToSIA",
  "Prefix": "",
  "Status": "Enabled",
  "Transitions": [
    {
      "Days": 30,
      "StorageClass": "STANDARD_IA"
    }
  ]
}

Here is an example of how to use the AWS CLI to set up a lifecycle rule that transitions objects to the “GLACIER” storage class after 30 days, and permanently deletes them after 90 days:

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --lifecycle-configuration '{
        "Rules": [
            {
                "ID": "Transition to GLACIER",
                "Prefix": "",
                "Status": "Enabled",
                "Transitions": [
                    {
                        "Days": 30,
                        "StorageClass": "GLACIER"
                    }
                ]
            },
            {
                "ID": "Delete after 90 days",
                "Prefix": "",
                "Status": "Enabled",
                "Expiration": {
                    "Days": 90
                }
            }
        ]
    }'

In this example, the “my-bucket” bucket would be set to have two lifecycle rules. The first rule transitions any objects in the bucket to the “GLACIER” storage class after 30 days. The second rule permanently deletes any objects in the bucket after 90 days.

It’s important to note that the aws s3api put-bucket-lifecycle-configuration command replaces the existing lifecycle configuration for the bucket with the one specified in the command; running it multiple times does not add rules cumulatively. If you want to add rules to the existing configuration, use the aws s3api get-bucket-lifecycle-configuration command to retrieve the current configuration, add the new rules to it, and then use the aws s3api put-bucket-lifecycle-configuration command to apply the updated configuration.
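
A sketch of that read-modify-write workflow:

# Save the current lifecycle configuration to a file
aws s3api get-bucket-lifecycle-configuration \
    --bucket my-bucket > lifecycle.json

# ... edit lifecycle.json to add the new rules ...

# Re-apply the full, updated configuration
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --lifecycle-configuration file://lifecycle.json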

Advantages:

  • Can automate the management of the objects in your S3 bucket
  • Can help you save money on storage costs (by moving objects to cheaper storage classes or deleting them after a certain number of days)
  • Can keep your bucket tidy by automatically expiring stale objects and cleaning up incomplete multipart uploads

Disadvantages:

  • A misconfigured expiration rule can permanently delete data that you still need
  • Transition and expiration actions incur per-request charges, which can add up for buckets with many small objects
  • Adds management complexity, since rules need to be reviewed as your access patterns change

 

  6. Not Encrypting Your S3 Objects

(Image source: Pexels)

The next one on the list of common AWS S3 mistakes is not encrypting the objects in your S3 bucket. Encryption is important for protecting the sensitive data that you store in your S3 bucket. S3 offers several options for encrypting your objects, including server-side encryption and client-side encryption. (Note that S3 now applies SSE-S3 server-side encryption to new objects by default, but you may still want to choose a stronger option such as SSE-KMS for sensitive data.)

To encrypt your S3 objects using server-side encryption, follow these steps:

  1. Open the S3 console and click on the name of the bucket that you want to encrypt.
  2. Click on the “Properties” tab and then click on the “Server-Side Encryption” sub-tab.
  3. Click on the “Edit” button and then select the encryption option that you want to use (e.g. AES-256).
  4. Click on the “Save” button to save your changes.

Here is an example of how to use the aws s3api put-object command to upload an object encrypted with the AES256 (SSE-S3) algorithm:

aws s3api put-object \
    --bucket my-bucket \
    --key my-object \
    --server-side-encryption AES256 \
    --body my-file.txt

In this example, the “my-object” object in the “my-bucket” bucket would be encrypted using the AES256 algorithm. The file “my-file.txt” would be used as the content of the object.

It’s important to note that the aws s3api put-object command only encrypts the object when it is uploaded to S3. If you want to encrypt an existing object, you can use the aws s3api copy-object command to copy the object to itself and specify the encryption options in the command.
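
A sketch of re-encrypting an existing object in place with copy-object:

aws s3api copy-object \
    --bucket my-bucket \
    --key my-object \
    --copy-source my-bucket/my-object \
    --server-side-encryption AES256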

Advantages:

  • Can protect the sensitive data that you store in your S3 bucket
  • Can comply with data privacy regulations (e.g. GDPR, HIPAA)
  • Can improve the security of your S3 bucket

Disadvantages:

  • Can require additional effort to set up and manage encryption for your S3 objects
  • Can reduce the performance of your S3 bucket (if you use client-side encryption)

 

  7. Not Setting Up Cross-region Replication for Your S3 Bucket

The next one on the list of common AWS S3 mistakes people often make is not setting up cross-region replication for their bucket. Cross-region replication allows you to automatically replicate the objects in your S3 bucket to a different region. This can provide additional redundancy and improve the availability of your data.

To set up cross-region replication for your S3 bucket, follow these steps:

  1. Open the S3 console and click on the name of the bucket that you want to set up replication for.
  2. Click on the “Management” tab and then click on the “Replication” sub-tab.
  3. Click on the “Add rule” button and then enter the details of the replication rule that you want to create.

Here’s an example replication rule that replicates objects from the “my-bucket” bucket in the “us-east-1” region to the “my-bucket-replica” bucket in the “eu-west-1” region:

{
  "Destination": {
    "Bucket": "arn:aws:s3:::my-bucket-replica",
    "StorageClass": "STANDARD"
  },
  "ID": "MyReplicationRule",
  "Prefix": "",
  "Status": "Enabled"
}

You can also set up cross-region replication using the AWS CLI. To do this, use the following command:

aws s3api put-bucket-replication \
    --bucket my-bucket \
    --replication-configuration '{
        "Role": "arn:aws:iam::123456789012:role/my-replication-role",
        "Rules": [
            {
                "ID": "Replicate all objects",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::my-destination-bucket"
                }
            }
        ]
    }'

In this example, the “my-bucket” bucket would be set up to replicate all objects to the “my-destination-bucket” bucket in another region, using the IAM role specified in the “Role” field. Note that versioning must be enabled on both the source and destination buckets for replication to work.

It’s important to note that the aws s3api put-bucket-replication command only sets up the replication configuration for the source bucket, and replication applies only to objects uploaded after the rule is in place. To copy objects that already exist in the bucket to the destination, you can use S3 Batch Replication or the aws s3 sync command.
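
A sketch of backfilling existing objects with aws s3 sync; the region names are placeholders:

aws s3 sync s3://my-bucket s3://my-destination-bucket \
    --source-region us-east-1 \
    --region eu-west-1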

Advantages:

  • Can provide additional redundancy for your S3 bucket
  • Can improve the availability of your data
  • Can enable you to use S3 buckets in different regions for different purposes (e.g. one region for primary storage and another region for disaster recovery)

Disadvantages:

  • Can require additional effort to set up and manage cross-region replication
  • Can increase the storage cost of your S3 bucket (since each replicated object is stored in more than one region)
  • Can incur inter-region data transfer charges for every object that is replicated

 

  8. Not Using S3 Transfer Acceleration

(Image source: Unsplash)

The next one on the list of common AWS S3 mistakes is not using S3 Transfer Acceleration. S3 Transfer Acceleration uses the Amazon CloudFront edge network to accelerate the transfer of large objects to and from your S3 bucket. This can significantly improve the performance of your S3 bucket, especially if you have users located in different regions.

To enable S3 Transfer Acceleration for your S3 bucket, follow these steps:

  1. Open the S3 console and click on the name of the bucket that you want to enable transfer acceleration for.
  2. Click on the “Properties” tab and then click on the “Transfer Acceleration” sub-tab.
  3. Click on the “Edit” button and then select the “Enable Transfer Acceleration” option.
  4. Click on the “Save” button to save your changes.

You can also enable S3 Transfer Acceleration using the AWS CLI. To do this, use the following command:

aws s3api put-bucket-accelerate-configuration \
    --bucket my-bucket \
    --accelerate-configuration Status=Enabled

In this example, the “my-bucket” bucket would be set to use S3 Transfer Acceleration. This means that transfers to and from the bucket can be routed through the Amazon CloudFront edge network, which can provide faster transfer speeds than the standard internet path, especially over long distances.

It’s important to note that Transfer Acceleration is enabled at the bucket level and requires a DNS-compliant bucket name that does not contain periods (“.”). Enabling it does not change how requests are routed by default; you must address the bucket through its accelerate endpoint to benefit from it.
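
A sketch of directing the CLI at the accelerate endpoint; the file name is a placeholder:

# Per-command: send this transfer through the accelerate endpoint
aws s3 cp my-large-file.bin s3://my-bucket/ \
    --endpoint-url https://s3-accelerate.amazonaws.com

# Or opt in once for all aws s3 commands in the current profile
aws configure set default.s3.use_accelerate_endpoint true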

Advantages:

  • Can significantly improve the performance of transferring large objects to and from your S3 bucket
  • Can speed up transfers even when your users are located far from the bucket’s region
  • Can reduce the time spent moving large objects over long-distance network paths

Disadvantages:

  • Can require additional effort to set up and enable S3 Transfer Acceleration
  • Can increase the cost of using S3 (if you use S3 Transfer Acceleration frequently)
  • Can only be used for transferring objects to and from your S3 bucket (not for other S3 operations)

 

  9. Not Using S3 Intelligent-Tiering

The next one on the list of common AWS S3 mistakes people often make is not using S3 Intelligent-Tiering. S3 Intelligent-Tiering is a storage class that automatically moves your objects between a frequent-access tier and a lower-cost infrequent-access tier based on their access patterns. This can help you save money on storage costs without sacrificing performance.

To use S3 Intelligent-Tiering, you simply store objects in the INTELLIGENT_TIERING storage class; the automatic movement between the frequent-access and infrequent-access tiers then happens without any further configuration. Optionally, you can add a bucket-level configuration to also move objects into the archive access tiers:

  1. Open the S3 console and click on the name of the bucket.
  2. Click on the “Management” tab and then find the “Intelligent-Tiering Archive configurations” section.
  3. Click on the “Create configuration” button, enter the details of the configuration, and save your changes.

You can also create an archive-tier configuration using the AWS CLI. To do this, use the following command:

aws s3api put-bucket-intelligent-tiering-configuration \
    --bucket my-bucket \
    --id MyIntelligentTieringConfiguration \
    --intelligent-tiering-configuration '{
        "Id": "MyIntelligentTieringConfiguration",
        "Status": "Enabled",
        "Tierings": [
            {
                "Days": 90,
                "AccessTier": "ARCHIVE_ACCESS"
            }
        ]
    }'

In this example, objects in the “my-bucket” bucket that are stored in the Intelligent-Tiering storage class and have not been accessed for 90 days would be moved to the Archive Access tier. The movement between the frequent-access and infrequent-access tiers happens automatically and requires no configuration at all.

It’s important to note that Intelligent-Tiering is itself a storage class. Automatic tiering applies only to objects stored in the INTELLIGENT_TIERING storage class; it does not apply to objects in other storage classes, such as S3 Glacier or S3 One Zone-Infrequent Access.
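
A sketch of placing an object in the Intelligent-Tiering storage class at upload time, assuming a local file named my-file.txt:

aws s3 cp my-file.txt s3://my-bucket/my-file.txt \
    --storage-class INTELLIGENT_TIERING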

Advantages:

  • Can automatically move your objects between frequent-access and infrequent-access tiers based on their access patterns
  • Can help you save money on storage costs without sacrificing performance
  • Can enable you to use the right storage class for different types of objects

Disadvantages:

  • Can require additional effort to set up (particularly the optional archive tiers)
  • Adds a small per-object monitoring and automation charge
  • Only applies to objects stored in the Intelligent-Tiering storage class (and objects smaller than 128 KB are not eligible for automatic tiering)

 

  10. Not Using S3 Select and S3 Glacier Select

The final mistake that we’ll cover is not using S3 Select and S3 Glacier Select. S3 Select allows you to quickly and efficiently retrieve only the data that you need from your S3 objects. This can improve performance and reduce the cost of your S3 operations. S3 Glacier Select is similar to S3 Select, but it’s optimized for use with the S3 Glacier storage class.

To use S3 Select, you don’t need to enable anything on the bucket; you simply issue a query against a single object using SQL-like syntax. Because S3 Select operates on one object at a time, the object is addressed by bucket and key in the request itself rather than in the SQL. Here’s an example query that retrieves only the “name” and “age” fields from a JSON object:

SELECT s.name, s.age
FROM S3Object s

You can use the AWS CLI to run S3 Select queries. To do this, use the following command:

aws s3api select-object-content \
    --bucket my-bucket \
    --key my-object \
    --expression "SELECT s.name, s.age FROM S3Object s" \
    --expression-type SQL \
    --input-serialization '{"JSON": {"Type": "LINES"}}' \
    --output-serialization '{"JSON": {}}' \
    results.json

In this example, the “my-object” object in the “my-bucket” bucket would be queried using the SQL expression “SELECT s.name, s.age FROM S3Object s”. The object is assumed to contain line-delimited JSON, the query results are returned as JSON, and they are written to the local file results.json.

It’s important to note that the aws s3api select-object-content command only retrieves the query results; it does not modify the object in any way. If you want to store the transformed data, you can upload the results as a new object using the aws s3api put-object command.

S3 Glacier Select works a little differently. Because Glacier objects are archived, the query runs as part of a restore request, and the results are written to a separate S3 output location instead of being streamed back. The SQL syntax is the same style of SELECT expression, but the job is asynchronous.

You can use the AWS CLI to run an S3 Glacier Select query. The query is submitted as a restore request with the aws s3api restore-object command, and the results are delivered to an output bucket and prefix that you specify (the bucket names below are placeholders; note that in Glacier Select expressions the archived object is referenced as “object” in the FROM clause):

# Move the object to the S3 Glacier storage class
aws s3api put-object \
    --bucket my-bucket \
    --key my-object \
    --storage-class GLACIER

# Submit the Glacier Select query as an asynchronous restore job
aws s3api restore-object \
    --bucket my-bucket \
    --key my-object \
    --restore-request '{
        "Type": "SELECT",
        "Tier": "Standard",
        "SelectParameters": {
            "Expression": "SELECT s.name, s.age FROM object s",
            "ExpressionType": "SQL",
            "InputSerialization": {"JSON": {"Type": "LINES"}},
            "OutputSerialization": {"JSON": {}}
        },
        "OutputLocation": {
            "S3": {"BucketName": "my-results-bucket", "Prefix": "select-results/"}
        }
    }'

In this example, the “my-object” object is first stored in the S3 Glacier storage class. The restore request then runs the SQL expression against the archived object and, once the job completes, writes the query results under the “select-results/” prefix in the “my-results-bucket” bucket.

It’s important to note that S3 Glacier Select is only available for objects in the S3 Glacier storage class. It is not available for objects in other storage classes, such as S3 Standard or S3 Intelligent-Tiering.

Advantages:

  • Can quickly and efficiently retrieve only the data that you need from your S3 objects
  • Can improve the performance and reduce the cost of your S3 operations
  • Can enable you to query your S3 objects using a SQL-like syntax

Disadvantages:

  • Can require additional effort to set up and use S3 Select and S3 Glacier Select
  • Can only be used for objects stored in the S3 Standard, S3 Standard-IA, and S3 Glacier storage classes (not for objects stored in other storage classes)
  • Can only be used for objects that are in a supported format (e.g. CSV, JSON, Parquet)

Overall, AWS S3 offers many powerful features that can help you improve the performance, security, and cost-efficiency of your S3 bucket. However, it’s important to avoid common mistakes and use best practices when working with S3. By following the tips and tricks in this blog post, you can avoid and fix the most common mistakes when working with S3.

 

Conclusion

In conclusion, AWS S3 is a powerful and feature-rich cloud storage service. However, it’s important to avoid common mistakes when working with S3. In this blog post, we covered 10 common AWS S3 mistakes and how to fix them. By following these best practices, you can improve the performance, security, and cost-efficiency of your S3 bucket.

Here are the key takeaways from this blog post:

  • Set up versioning for your S3 bucket to preserve, retrieve, and restore previous versions of objects.
  • Use the correct storage class for your S3 objects to save money on storage costs and improve performance.
  • Set up access control for your S3 bucket to improve security and control who can access your objects.
  • Enable MFA delete for your S3 bucket to prevent accidental or malicious deletions of objects.
  • Set up lifecycle rules for your S3 bucket to automate the management of objects and save money on storage costs.
  • Encrypt your S3 objects to protect sensitive data and comply with data privacy regulations.
  • Set up cross-region replication for your S3 bucket to provide additional redundancy and improve availability.
  • Use S3 Transfer Acceleration to improve the performance of transferring large objects to and from your S3 bucket.
  • Use S3 Intelligent Tiering to automatically move your objects between storage classes based on their access patterns.
  • Use S3 Select and S3 Glacier Select to quickly and efficiently retrieve only the data that you need from your S3 objects.

By following these best practices, you can avoid common AWS S3 mistakes and get the most out of AWS S3.

Thanks for Reading!πŸ€—

If you liked this blog, consider following me on Analytics Vidhya, Medium, GitHub, and LinkedIn.

 

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Hello there! πŸ‘‹πŸ» My name is Swapnil Vishwakarma, and I'm delighted to meet you! πŸ„β€β™‚οΈ

I've had some fantastic experiences in my journey so far! I worked as a Data Science Intern at a start-up called Data Glacier, where I had the opportunity to delve into the fascinating world of data. I also had the chance to be a Python Developer Intern at Infigon Futures, where I honed my programming skills. Additionally, I worked as a research assistant at my college, focusing on exciting applications of Artificial Intelligence. βš—οΈπŸ‘¨β€πŸ”¬

During the lockdown, I discovered my passion for Machine Learning, and I eagerly pursued a course on Machine Learning offered by Stanford University through Coursera. Completing that course empowered me to apply my newfound knowledge in real-world settings through internships. Currently, I'm proud to be an AWS Community Builder, where I actively engage with the AWS community, share knowledge, and stay up to date with the latest advancements in cloud computing.

Aside from my professional endeavors, I have a few hobbies that bring me joy. I love swaying to the beats of Punjabi songs, as they uplift my spirits and fill me with energy! 🎡 I also find solace in sketching and enjoy immersing myself in captivating books, although I wouldn't consider myself a bookworm. πŸ›

Feel free to ask me anything or engage in a friendly conversation! I'm here to assist you in English. 😊
