Serverless Framework with AWS & Node.js
- Technologies Used: AWS, Node.js, Serverless Framework, and a lot more.
- Aditya Hajare (LinkedIn).
WIP (Work In Progress)!
- Theory
- Architecture Patterns - Multi Tier
- Architecture Patterns - Microservices
- Architecture Patterns - Multi Provider Serverless
- AWS Lambda Limits
- DynamoDB
- AWS Step Functions
- AWS SAM
- CICD 101
- Serverless Security Best Practices 101
- Best Practices 101 - AWS Lambda
- Best Practices 101 - AWS API Gateway
- Best Practices 101 - DynamoDB
- Best Practices 101 - Step Functions
- Setup And Workflow 101
- New Project Setup In Pre Configured Environment 101
- Installing Serverless
- Configuring AWS Credentials For Serverless
- Create NodeJS Serverless Service
- Invoke Lambda Function Locally
- Event - Passing Data To Lambda Function
- Serverless Offline
- NPM Run Serverless Project Locally
- Deploy Serverless Service
- Setup Serverless DynamoDB Local
- Securing APIs
- AWS CLI Handy Commands
- Common Issues
Open-sourced software licensed under the MIT license.
- Every AWS account comes with a default `VPC (Virtual Private Cloud)`.
- At the moment, an `AWS Lambda Function` can run up to a maximum of `15 Minutes`.
- Returning `HTTP` responses from `AWS Lambda` allows us to integrate them with `Lambda Proxy Integration` for `API Gateway`.
- A `Step Function` can run up to a maximum period of `1 Year`. `Step Functions` allow us to combine different `Lambda Functions` to build `Serverless Applications` and `Microservices`.
- There could be different reasons why you may want to restrict your `Lambda Function` to run within a given `VPC`. For e.g.:
    - You may have an `Amazon RDS` instance running on `EC2` inside your `VPC` and you want to connect to that instance through `Lambda` without exposing it to the outside world. In that case, your `Lambda Function` must run inside that `VPC`.
- When a `Lambda Function` is attached to any `VPC`, it automatically loses access to the internet, unless of course we open a `Port` on the `VPC Security Group` to allow `Outbound Connections`.
- While attaching a `Lambda Function` to a `VPC`, we must select at least 2 `Subnets`, although we can choose more `Subnets` if we like.
- When we are using the `Serverless Framework`, all of this, including assigning the necessary permissions, is taken care of automatically by the `Serverless Framework`.
- `Tags` are useful for organising and tracking our billing.
- `Serverless Computing` is a cloud computing execution model in which the cloud provider dynamically manages the allocation of infrastructure resources, so we don't have to worry about managing servers or any of the infrastructure.
- `AWS Lambda` is an `Event Driven` serverless computing platform, or a `Compute Service`, provided by AWS.
- The code that we run on `AWS Lambda` is called a `Lambda Function`.
- A `Lambda Function` executes whenever it is triggered by a pre-configured `Event Source`. `Lambda Functions` can be triggered by numerous event sources, like:
    - `API Gateway`.
    - `S3` File Uploads.
    - Changes to `DynamoDB` table data.
    - `CloudWatch` events.
    - `SNS` Notifications.
    - Third Party APIs.
    - `IoT Devices`.
    - And so on..
- `Lambda Functions` run in `Containerized Environments`.
- We are charged only for the time our `Lambda Functions` are executing.
- No charge for `Idle Time`.
- Billing is done in increments of `100 ms` of the `Compute Time`.
- `AWS Lambda` uses a decoupled `Permissions Model`.
- `AWS Lambda` supports 2 `Invocation Types`:
    - Synchronous.
    - Asynchronous.
- The `Invocation Type` of an AWS Lambda call depends on the `Event Source`. For e.g. an `API Gateway` or `Cognito` event is `Synchronous`, whereas an `S3 Event` is always `Asynchronous` (see the sketch below).
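- A minimal sketch (the function name and payload are assumptions) of choosing the invocation type explicitly when calling a function through the AWS SDK; `RequestResponse` is synchronous, `Event` is asynchronous:
    ```javascript
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });
    const lambda = new AWS.Lambda();

    lambda.invoke({
        FunctionName: 'sayHello',                    // Hypothetical function name.
        InvocationType: 'Event',                     // 'Event' = asynchronous, 'RequestResponse' = synchronous.
        Payload: JSON.stringify({ name: 'Aditya' })
    }, (err, data) => {
        if (err) console.log(err);
        else console.log(data.StatusCode);           // 202 for 'Event', 200 for 'RequestResponse'.
    });
    ```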
- `pathParameters` and `queryStringParameters` are pre-defined attributes of the `API Gateway AWS Proxy Event`.
- `AWS API Gateway` expects the Lambda function to return a `well formed http response` instead of just the data or just the response body. At the bare minimum, our response must have a `statusCode` and a `body` (the `body` must be a JSON string). For e.g. `{ "statusCode": 200, "body": "{\"message\": \"Hello Aditya\"}" }`.
- Typical code to build the above response:
    ```javascript
    return {
        statusCode: 200,
        body: JSON.stringify({ message: "Hello Aditya" })
    };
    ```
- Lambda Versioning:
    - When we don't explicitly publish a new version, Lambda will use the `$LATEST` version.
    - The latest version is always denoted by `$LATEST`.
    - The last edited version is always marked as the `$LATEST` one.
- (Without using Lambda Aliases) How to use a different version of a Lambda Function in API Gateway? The bad way!
    - Under the AWS Console, go to `API Gateway`.
    - Click on the `Request (GET/POST/PUT/PATCH/DELETE)` under the `Resource`.
    - Click on `Integration Request`.
    - Configure the `Lambda Function` setting with a value of the version separated by a colon.
    - Re-deploy the API.
    - For e.g.:
    ```javascript
    // Lambda Function name: adiTest
    // Available Lambda Function versions: v1, v2, v3 ..etc.
    // To use v2 for the API Gateway GET Request, set the Lambda Function value as below:
    { "Lambda Function": "adiTest:2" }
    ```
- Need for Lambda Aliases:
    - Without `Lambda Aliases`, whenever we publish a new `Lambda Version`, we have to manually edit API Gateway to use the new `Lambda Version` and then republish the API (refer to the steps above).
    - Every time we publish a new `Lambda Version`, `API Gateway` should automatically pick up the change without us having to re-deploy the API. `Lambda Aliases` help us achieve this.
- Lambda Aliases:
    - It's a good practice to create 1 `Lambda Alias` per `Environment`. For e.g. we could have aliases for dev, production, stage etc. environments.
    - While configuring a `Lambda Alias`, we can use the `Additional Version` setting for `Split Testing`. `Split Testing` allows us to split user traffic between multiple `Lambda Versions`.
    - To use a `Lambda Alias` in `API Gateway`, we simply have to replace the `Version Number` (separated by a colon) under the `Lambda Function` setting (in the `API Gateway` settings) with an `Alias`.
    - Re-deploy the API.
    - For e.g.:
    ```javascript
    // Lambda Function name: adiTest
    // Available Lambda Function versions: v1, v2, v3 ..etc.
    // Available Lambda Function aliases: dev, stage, prod ..etc.
    // Aliases are pointing to the following Lambda versions:
    { "dev": "v1", "stage": "v2", "prod": "$LATEST" }
    // To use v2 for the API Gateway GET Request, set the Lambda Function value as below:
    { "Lambda Function": "adiTest:stage" }
    ```
- Stage Variables in API Gateway:
    - Every time we make changes to `API Gateway`, we don't want to update the `Alias Name` in every `Lambda Function` before deploying the corresponding `Stage`. To address this challenge, we can make use of what are called `Stage Variables` in API Gateway.
    - `Stage Variables` can be used for various purposes, like:
        - Choosing backend database tables based on the environment.
        - Dynamically choosing the `Lambda Alias` corresponding to the current `Stage`.
        - Or any other configuration.
    - `Stage Variables` are available inside the `context` object of the `Lambda Function`.
    - Since `Stage Variables` are available inside the `context` object, we can also use them in `Body Mapping Templates`.
    - `Stage Variables` can be used as follows:
        - Inside the `API Gateway Resource Configuration`, to choose the `Lambda Function Alias` corresponding to the current stage:
        ```javascript
        // Use ${stageVariables.variableName}
        { "Lambda Function": "myFunction:${stageVariables.variableName}" }
        ```
- Canary Deployment:
    - Related to `API Gateway`.
    - Used for traffic splitting between different versions in `API Gateway`.
    - Use the `Promote Canary` option to direct all traffic to the latest version once our testing using traffic splitting is done.
    - After directing all traffic to the latest version using the `Promote Canary` option, we can choose to `Delete Canary` once we are sure.
- Encryption For Environment Variables In Lambda:
    - By default, `Lambda` uses the default `KMS` key to encrypt `Environment Variables`.
    - `AWS Lambda` has built-in encryption at rest and it's enabled by default.
    - When our `Lambda` uses `Environment Variables`, they are automatically encrypted by the `Default KMS Key`.
    - When the `Lambda` function is invoked, `Environment Variables` are automatically `decrypted` and made available in the `Lambda Function's` code.
    - However, this only takes care of `Encryption at rest`. During `Transit`, for e.g. when we are deploying the `Lambda Function`, these `Environment Variables` are still transferred in `Plain Text`.
    - So, if `Environment Variables` contain sensitive information, we can enable `Encryption in transit`.
    - If we enable `Encryption in transit`, then the `Environment Variable Values` will be masked using a `KMS Key` and we must decrypt their contents inside the `Lambda Function` to get the actual values stored in the variables (see the sketch below).
    - While creating `KMS Keys`, be sure to choose the same `Region` as our `Lambda Function's Region`.
    - Make sure to give our `Lambda Function's Role` permission to use the `KMS Key` inside the `KMS Key's` policy.
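    - A minimal sketch (the `DB_PASSWORD` variable name is hypothetical) of decrypting an environment variable that was encrypted in transit, using the KMS SDK and caching the plaintext for container reuse:
    ```javascript
    const AWS = require('aws-sdk');
    const kms = new AWS.KMS();

    let decryptedDbPassword; // Cached so we decrypt only once per container.

    exports.handler = async (event) => {
        if (!decryptedDbPassword) {
            const { Plaintext } = await kms.decrypt({
                CiphertextBlob: Buffer.from(process.env.DB_PASSWORD, 'base64')
            }).promise();
            decryptedDbPassword = Plaintext.toString('ascii');
        }
        // Use decryptedDbPassword here (never log it).
        return { statusCode: 200, body: JSON.stringify({ message: 'Secret loaded' }) };
    };
    ```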
- Retry Behavior in AWS Lambda:
    - `Lambda Functions` have built-in `Retry Behavior`. i.e. when a `Lambda Function` fails, `AWS Lambda` automatically attempts to retry the execution up to `2 Times` if it was invoked `Asynchronously (Push Events)`.
    - A lambda function could fail for different reasons, such as:
        - A logical or syntactical error in the Lambda Function's code.
        - A network outage.
        - The lambda function could hit the timeout.
        - The lambda function could run out of memory.
        - And so on..
    - When any of the above things happen, the Lambda function will throw an `Exception`. How this `Exception` is handled depends upon how the `Lambda Function` was invoked, i.e. `Synchronously or Asynchronously (Push Events)`.
    - If the `Lambda Function` was invoked `Asynchronously (Push Events)`, then `AWS Lambda` will automatically retry up to `2 Times (with some time delay in between)` on execution failure.
    - If we configure a `DLQ (Dead Letter Queue)`, it will collect the `Payload` after subsequent retry failures, i.e. after `2 Attempts` (see the sketch after this list).
    - If a function was invoked `Synchronously`, then the calling application will receive an `HTTP 429` error when the function execution fails.
    - If a `DLQ (Dead Letter Queue)` is not configured for a `Lambda Function`, it will discard the event after 2 retry attempts.
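    - A minimal sketch, assuming the Serverless Framework's `onError` setting (which attaches an SNS topic as the function's DLQ target; the function name and topic ARN are hypothetical):
    ```yaml
    functions:
      processOrder:
        handler: src/orders.process
        # Payloads of failed asynchronous invocations (after the 2 automatic retries)
        # are published to this SNS topic instead of being discarded.
        onError: arn:aws:sns:ap-south-1:123456789012:order-failures-dlq
    ```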
- Container Reuse:
    - A `Lambda Function` executes in a `Containerized Environment`.
    - Whenever we create or update a `Lambda Function`, i.e. either the function code or configuration, `AWS Lambda` creates a new `Container`.
    - Whenever a `Lambda Function` is executed for the first time after we create or update it, `AWS Lambda` creates a new `Container`.
    - Once the `Lambda Function` execution is finished, `AWS Lambda` will shut down the `Container` after a while.
    - Code written outside the `Lambda Handler` is executed once per `Container`. For e.g.:
    ```javascript
    // Below code will be executed once per container.
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });
    const s3 = new AWS.S3();

    // Below code (code inside the Lambda handler) will be executed every time the Lambda Function is invoked.
    exports.handler = async (event, context) => {
        return "Hello Aditya";
    };
    ```
    - It's a good practice to write all initialisation code outside the `Lambda Handler`.
    - If we have written any file to `/tmp` and a `Container` is reused for a `Lambda Function Execution`, that file will be available in subsequent invocations.
    - Reused `Containers` result in faster executions.
    - We do not have any control over when `AWS Lambda` will reuse a `Container` and when it won't.
    - If we spawn any background processes in `Lambda Functions`, they will be executed only until the `Lambda Handler` returns a response. The rest of the time they stay `Frozen`.
- Running a `Lambda Function` inside a `VPC` can result in `Cold Starts`. A `VPC` also introduces some delay before a function can execute, which can result in a `Cold Start`.
- `Resource Policies` get applied at the `API Gateway` level, whereas `IAM Policies` get applied at the `User/Client` level.
- The most common architecture pattern; we find it almost everywhere, irrespective of whether we are using servers or going serverless.
- The most common form of `Multi-Tier Architecture` is the `3-Tier Architecture`. Even the `Serverless` form has the same `3 Tiers`, as below:
    - `Frontend/Presentation Tier`.
    - `Application/Logic Tier`.
    - `Database/Data Tier`.
- In a `Serverless 3-Tier Architecture`:
    - `Database/Data Tier`:
        - The `Database/Data Tier` contains the databases (`Data Stores`) like `DynamoDB`.
        - `Data Stores` fall into 2 categories:
            - `IAM Enabled` data stores (over `AWS APIs`). These data stores allow applications to connect to them through `AWS APIs`. For e.g. `DynamoDB`, `Amazon S3`, `Amazon ElasticSearch Service` etc.
            - `VPC Hosted` data stores (using database credentials). These data stores run in hosted instances within a `VPC`. For e.g. `Amazon RDS`, `Amazon Redshift`, `Amazon ElastiCache`. And of course we can install any database of our choice on `EC2` and use it here. For e.g. we can run a `MongoDB` instance on `EC2` and connect to it through `Serverless Lambda Functions`.
    - `Application/Logic Tier`:
        - This is where the core business logic of our `Serverless Application` runs.
        - This is where core `AWS Services` like `AWS Lambda`, `API Gateway`, `Amazon Cognito` etc. come into play.
    - `Frontend/Presentation Tier`:
        - This tier interacts with the backend through the `Application/Logic Tier`.
        - For e.g. the frontend could use an `API Gateway Endpoint` to call `Lambda Functions`, which in turn interact with the data stores available in the `Database/Data Tier`.
        - `API Gateway Endpoints` can be consumed by a variety of applications, such as `Web Apps` (like static websites hosted on `S3`), `Mobile Application Frontends`, `Voice Enabled Devices Like Alexa` or different `IoT Devices`.
- A typical use case of `Serverless Architecture` is the `Microservices Architecture Pattern`.
- The `Microservices Architecture Pattern` is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.
- These services are built around business capabilities and are independently deployable by fully automated deployment machinery.
- There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
- The core idea of a `Microservices Architecture` is to take a complex system and break it down into independent, decoupled services that are easy to manage and extend. These services communicate over well defined APIs and are often owned by small, self-contained teams.
- A `Microservices Architecture` makes applications easier to scale and faster to develop, enabling innovation and accelerating time to market for new features.
- Each `Service` performs a single, specific function. And because they run independently, each `Service` can be updated, deployed and scaled to meet the demands of the application.
- A newer and slowly emerging pattern.
- This is all about reducing dependence on one specific cloud provider and making our application even more resilient.
- There are several big companies today that offer `Serverless Compute Services` like `AWS Lambda`. Some of these offerings are `Google Cloud Functions`, `Microsoft Azure Functions`, `IBM Cloud Functions` etc.
- When we choose a `Cloud Provider`, we kind of get locked in to continue using the services offered by that particular `Cloud Provider`.
- For building `Cloud Provider Agnostic Serverless Applications`, or in other words, for building `Multi Provider Serverless Applications`, we can make use of the `Serverless Framework`.
- For building `Multi Provider Serverless Applications`, the team behind the `Serverless Framework` offers a solution called the `Event Gateway` (https://github.com/serverless/event-gateway).
- `Event Gateway` is an open source tool and it is part of their offering called the `Serverless Platform`.
- The `Event Gateway` allows us to react to any event with `Serverless Functions` hosted on different `Cloud Providers`. `Event Gateway` also allows us to send events from different `Cloud Providers` and we can react to these events using `Serverless Functions` from any `Cloud Provider`.
- The `Event Gateway` tool is still under heavy development and not production ready yet (as of 11 March 2020).
| Resource | Default Limit |
|---|---|
| Concurrent executions | 1,000 |
| Function and layer storage | 75 GB |
| Elastic network interfaces per VPC | 250 |
| Function memory allocation | 128 MB to 3,008 MB, in 64 MB increments. |
| Function timeout | 900 seconds (15 minutes) |
| Function environment variables | 4 KB |
| Function resource-based policy | 20 KB |
| Function layers | 5 layers |
| Function burst concurrency | 500 - 3000 (varies per region) |
| Invocation frequency (requests per second) | 10 x concurrent executions limit (synchronous – all sources) 10 x concurrent executions limit (asynchronous – non-AWS sources) Unlimited (asynchronous – AWS service sources) |
| Invocation payload (request and response) | 6 MB (synchronous) 256 KB (asynchronous) |
| Deployment package size | 50 MB (zipped, for direct upload) 250 MB (unzipped, including layers) 3 MB (console editor) |
| Test events (console editor) | 10 |
| `/tmp` directory storage | 512 MB |
| File descriptors | 1,024 |
| Execution processes/threads | 1,024 |
- Datatypes:
    - Scalar: Represents exactly one value.
        - For e.g. String, Number, Binary, Boolean, Null.
        - `Keys` or `Index` attributes only support the String, Number and Binary scalar types.
    - Set: Represents multiple scalar values.
        - For e.g. String Set, Number Set and Binary Set.
    - Document: Represents a complex structure with nested attributes.
        - For e.g. List and Map.
- The `String` datatype can store only `non-empty` values.
- The maximum size of any item in DynamoDB is limited to `400 KB`. Note: An item represents the entire row of data (like in an RDBMS).
- `Sets` are unordered collections of either Strings, Numbers or Binary values.
    - All values must be of the same scalar type.
    - Duplicate values are not allowed.
    - Empty sets are not allowed.
- `Lists` are ordered collections of values.
    - Can have multiple data types.
- `Maps` are unordered collections of `Key-Value` pairs.
    - Ideal for storing JSON documents in DynamoDB.
    - Can have multiple data types.
- DynamoDB supports 2 types of `Read Operations (Read Consistency)`:
    - `Strong Consistency`:
        - Returns the most up-to-date data.
        - Must be requested explicitly (see the sketch below).
    - `Eventual Consistency`:
        - May or may not reflect the latest copy of data.
        - This is the default consistency for all operations.
        - 50% cheaper than a `Strongly Consistent Read` operation.
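- A minimal sketch (the table and key names are assumptions, reused from the item-level example below) of explicitly requesting a strongly consistent read via the `DocumentClient`:
    ```javascript
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });
    const docClient = new AWS.DynamoDB.DocumentClient();

    docClient.get({
        TableName: 'adi_notes_app',
        Key: { user_id: 'test123', timestamp: 1 },
        ConsistentRead: true   // Omit this (defaults to false) for a cheaper, eventually consistent read.
    }, (err, data) => {
        if (err) console.log(err);
        else console.log(JSON.stringify(data, null, 2));
    });
    ```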
- Internally, DynamoDB stores data in `Partitions`.
- `Partitions` are nothing but `Blocks of memory`.
- A table can have `1 or more partitions` depending on its size and throughput.
- Each `Partition` in DynamoDB can hold a maximum of `10 GB` of data.
- Partitioning e.g.:
    - For `500 RCU and 500 WCU` ---> `1 Partition`.
    - For `1000 RCU and 1000 WCU` ---> `2 Partitions`.
- For `Table` level operations, we need to instantiate and use the `DynamoDB` class from the `aws-sdk`:
    ```javascript
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });

    const dynamoDB = new AWS.DynamoDB(); // Instantiating the DynamoDB class for table level operations.

    dynamoDB.listTables({}, (err, data) => {
        if (err) {
            console.log(err);
        } else {
            console.log(JSON.stringify(data, null, 2));
        }
    });
    ```
- For `Item` level operations, we need to instantiate and use the `DocumentClient` from the `DynamoDB` class of the `aws-sdk`:
    ```javascript
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });

    const docClient = new AWS.DynamoDB.DocumentClient(); // Instantiate and use the DocumentClient class for Item level operations.

    docClient.put({
        TableName: 'adi_notes_app',
        Item: {
            user_id: 'test123',
            timestamp: 1,
            title: 'Test Note',
            content: 'Test Note Content..'
        }
    }, (err, data) => {
        if (err) {
            console.log(err);
        } else {
            console.log(JSON.stringify(data, null, 2));
        }
    });
    ```
- The `batchWrite()` method allows us to perform multiple write operations (e.g. Put, Delete) in one go.
- Conditional writes in DynamoDB are `idempotent`. i.e. if we make the same conditional write request multiple times, only the first request will be considered.
- `docClient.query()` allows us to fetch items from a specific `partition`.
- `docClient.scan()` allows us to fetch items from all `partitions`.
- Pagination:
    - Any `query()` or `scan()` operation can return a maximum of `1 MB` of data in a single request.
    - If our `query/scan` operation has more records to return (after exceeding the 1 MB limit), we will receive a `LastEvaluatedKey` key in the response.
    - `LastEvaluatedKey` is simply an object containing the `Index Attributes` of the item up to which the response was returned.
    - In order to retrieve further records, we must pass the `LastEvaluatedKey` value under the `ExclusiveStartKey` attribute in our subsequent query (see the sketch below).
    - If there is no `LastEvaluatedKey` attribute present in a DynamoDB query/scan response, it means we have reached the last page of data.
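- A minimal sketch (hypothetical helper; table and attribute names are assumptions) of draining all pages of a query using `LastEvaluatedKey` / `ExclusiveStartKey`:
    ```javascript
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });
    const docClient = new AWS.DynamoDB.DocumentClient();

    const fetchAllNotes = async (userId) => {
        const items = [];
        let lastKey; // undefined until the first page comes back.
        do {
            const params = {
                TableName: 'adi_notes_app',
                KeyConditionExpression: 'user_id = :uid',
                ExpressionAttributeValues: { ':uid': userId }
            };
            if (lastKey) {
                params.ExclusiveStartKey = lastKey; // Resume from where the previous page stopped.
            }
            const page = await docClient.query(params).promise();
            items.push(...page.Items);
            lastKey = page.LastEvaluatedKey; // undefined once we reach the last page.
        } while (lastKey);
        return items;
    };
    ```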
- DynamoDB Streams:
    - In simple words, it's a `24 Hour Time-ordered Log`.
    - `DynamoDB Streams` maintain a `Time-Ordered Log` of all changes in a given `DynamoDB Table`.
    - This log stores all the `Write Activity` that took place in the last `24 hrs`.
    - Whenever any changes are made to a `DynamoDB Table` and a `DynamoDB Stream` is enabled for that table, these changes are published to the `Stream`.
    - There are several ways to consume and process data from `DynamoDB Streams`:
        - We can use the `Kinesis Adapter` along with the `Kinesis Client Library`. `Kinesis` is a platform for processing `High Volume` streaming data on `AWS`.
        - We can also make use of the `DynamoDB Streams SDK` to work with `DynamoDB Streams`.
        - `AWS Lambda Triggers` also allow us to work with `DynamoDB Streams`. This approach is much easier and more intuitive. `DynamoDB Streams` will invoke `Lambda Functions` based on the changes received by them (see the sketch below).
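    - A minimal sketch (assuming the table's stream is enabled and wired to this function as a trigger) of a handler reacting to `DynamoDB Stream` records:
    ```javascript
    exports.handler = async (event) => {
        for (const record of event.Records) {
            // 'eventName' is INSERT, MODIFY or REMOVE.
            console.log(`${record.eventName} on ${record.eventSourceARN}`);
            // The new item image is present when the stream view type includes NEW_IMAGE.
            if (record.dynamodb && record.dynamodb.NewImage) {
                console.log(JSON.stringify(record.dynamodb.NewImage, null, 2));
            }
        }
        return `Processed ${event.Records.length} records.`;
    };
    ```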
- `AWS Step Functions` are the logical progression of `AWS Lambda Functions`.
- With `Step Functions` we can create visual workflows to coordinate or orchestrate different `Lambda Functions` to work together.
- A `Step Function` can run for a maximum period of `1 Year`.
- We can use `Step Functions` to automate routine jobs like deployments, upgrades, migrations, patches and so on.
- `Step Functions` allow us to combine different `Lambda Functions` to build `Serverless Applications` and `Microservices`.
- Just like with `Lambda Functions`, there is no need to provision any resources or infrastructure for `Step Functions`.
- We simply use the `ASL (Amazon States Language)` to define the workflows. It's a JSON based structured language. We use this language to define the various steps as well as the different connections and interactions between these steps in `Step Functions`.
- The resulting workflow is called the `State Machine`.
- A `State Machine` is displayed in a graphical form, just like a flowchart.
- `State Machines` also have built-in error handling mechanisms. We can retry operations based on different errors or conditions.
- Billing is on a `Pay as you go` basis. We only pay for the transitions between `Steps`.
- The `task` step allows us to invoke a `Lambda Function` from our `State Machine`.
- The `activity` step allows us to run any code on `EC2 Instances`. It is similar to the `task` step, except that the `activity` step is not a `Serverless` kind of step.
- Whenever any `Step` in a `State Machine` fails, the entire `State Machine` fails. Here `Steps` means `Lambda Functions`, or any errors or exceptions received by a `Step`.
- We can also use `CloudWatch Rules` to execute a `State Machine`.
- We can use a `Lambda Function` to trigger a `State Machine Execution`. The advantage of this approach is that `Lambda Functions` support many triggers for their invocation. So we have numerous options to trigger the `Lambda Function`, and our `Lambda Function` will trigger the `State Machine Execution` using the `AWS SDK`.
- While building a `State Machine`, if it has any `Lambda Functions (task states)`, always specify the `TimeoutSeconds` option to make sure our `State Machine` doesn't get stuck or hang.
- In a `State Machine`, the `catch` field is used to specify the `Error Handling Catch Mechanism` (see the sketch below).
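- A minimal sketch (state names and the function ARN are hypothetical) of an `ASL` `State Machine` definition with a single `task` state that uses `TimeoutSeconds`, `Retry` and `Catch`:
    ```json
    {
      "Comment": "Sketch of a State Machine with one Lambda task state.",
      "StartAt": "ProcessOrder",
      "States": {
        "ProcessOrder": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:ap-south-1:123456789012:function:processOrder",
          "TimeoutSeconds": 30,
          "Retry": [
            { "ErrorEquals": ["States.Timeout"], "MaxAttempts": 2, "IntervalSeconds": 5 }
          ],
          "Catch": [
            { "ErrorEquals": ["States.ALL"], "Next": "HandleFailure" }
          ],
          "End": true
        },
        "HandleFailure": {
          "Type": "Fail",
          "Error": "OrderProcessingFailed"
        }
      }
    }
    ```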
- AWS SAM: `Serverless Application Model`.
- `AWS SAM` is just a simplified version of `CloudFormation Templates`.
- It seamlessly integrates with `AWS Deployment Tools` like `CodeBuild`, `CodeDeploy`, `CodePipeline` etc.
- It provides a `CLI` to build, test and deploy `Serverless Applications`.
- Every `SAM Template` begins with:
    ```yaml
    AWSTemplateFormatVersion: "2010年09月09日"
    Transform: AWS::Serverless-2016年10月31日
    ```
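- A minimal sketch (the resource name, code path and handler are assumptions) of a complete `SAM Template` defining one `Lambda Function` behind an `API Gateway` endpoint:
    ```yaml
    AWSTemplateFormatVersion: "2010年09月09日"
    Transform: AWS::Serverless-2016年10月31日
    Resources:
      HelloWorldFunction:
        Type: AWS::Serverless::Function
        Properties:
          CodeUri: hello-world/        # Folder containing the function code.
          Handler: app.lambdaHandler   # file.exportedFunction
          Runtime: nodejs12.x
          Events:
            HelloWorld:
              Type: Api                # Creates an API Gateway endpoint.
              Properties:
                Path: /hello
                Method: get
    ```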
- To deploy a `SAM` application using `CloudFormation Commands` (instead of using the `SAM CLI`):
    - It involves `2 Steps`:
        - Package the application and push it to an `S3 Bucket`. This step requires the `S3 Bucket` to be created prior to running the `CloudFormation Package` command.
        - Deploy the packaged application.
    - Step 1: We need an `S3 Bucket` created before we deploy. If we don't have one, create it using the following command:
    ```bash
    aws s3 mb s3://aditya-sam-app
    ```
    - Step 2: Package the application:
    ```bash
    aws cloudformation package --template-file template.yaml --output-template-file output-sam-template.yaml --s3-bucket aditya-sam-app
    ```
    - Step 3: Deploy the application (here, we will be using the generated output SAM template file):
    ```bash
    aws cloudformation deploy --template-file output-sam-template.yaml --stack-name aditya-sam-app-stack --capabilities CAPABILITY_IAM
    ```
- To generate SAM project boilerplate from a sample app:
    ```bash
    sam init --runtime nodejs12.x
    ```
- To execute a `Lambda Function` locally with the `SAM CLI`:
    ```bash
    # -e to pass event data to the Lambda Function. This file must be present in the current location.
    sam local invoke HelloWorldFunction -e events/event.json

    # Alternatively, we can pass event data inline within the command by simply piping it as below.
    # Here we are sending empty event data to the Lambda Function.
    echo '{}' | sam local invoke HelloWorldFunction
    ```
- The `SAM CLI` also allows us to invoke `Lambda Functions` locally from within our application code. To do so, we have to start the `Lambda Service` locally using the `SAM CLI`:
    ```bash
    sam local start-lambda
    ```
- To run the `API Gateway` service locally:
    - Navigate to the folder where our `SAM Template` is located (e.g. `template.yaml`).
    - Execute the following command to run the `API Gateway Service` locally:
    ```bash
    sam local start-api
    ```
- To validate a `SAM Template` locally:
    - Navigate to the folder where our `SAM Template` is located (e.g. `template.yaml`).
    - Execute the following command to validate the `SAM Template` locally:
    ```bash
    sam validate
    ```
- To deploy an application using the `SAM CLI`:
    - It involves `2 Steps`:
        - Package the application and push it to an `S3 Bucket`. This step requires the `S3 Bucket` to be created prior to running the `SAM Package` command.
        - Deploy the packaged application.
    - Step 1: We need an `S3 Bucket` created before we deploy. If we don't have one, create it using the following command:
    ```bash
    aws s3 mb s3://aditya-sam-app
    ```
    - Step 2: Package the application:
    ```bash
    sam package --template-file template.yaml --output-template-file output-sam-template.yaml --s3-bucket aditya-sam-app
    ```
    - Step 3: Deploy the application (here, we will be using the generated output SAM template file):
    ```bash
    sam deploy --template-file output-sam-template.yaml --stack-name aditya-sam-app-stack --capabilities CAPABILITY_IAM
    ```
- To view `Lambda Function` logs using the `SAM CLI`:
    ```bash
    sam logs -n LAMBDA_FUNCTION_NAME --stack-name STACK_NAME --tail

    # For e.g.:
    sam logs -n GetUser --stack-name aditya-sam-app-stack --tail
    ```
- `AWS CodeCommit`
    - It is a source control service which allows us to host our `Git Based` repositories.
- `AWS CodeBuild`
    - It is a `Continuous Integration` service. We can use it to `Package` and optionally `Deploy` our applications.
- `AWS CodePipeline`
    - It is a `Continuous Delivery` service. It allows us to automate entire `Deployment` and `Release Cycles`.
- Setup 101.
    - Initialize a `Git Repository` on the local machine.
    - Step #1: Create a `CodeCommit Repository`:
        - Go to `CodeCommit` in the `AWS Console` and create a new repository.
        - Go to `IAM` in the `AWS Console` and create a new user. Provide:
            - Only `Programmatic Access`. No need to provide access to the `AWS Console`.
            - Attach an `Existing Policy`. Look for `CodeCommit` in the policies.
            - It will show us the `AWS Credentials` for the user. Ignore them.
            - Under `Users`, open that user and go to `Security Credentials`. Scroll down to see `HTTPS Git credentials for AWS CodeCommit`. Click on the `Generate` button there.
            - It will show us the `Username` and `Password` for this user. Download that.
        - Go to the `CodeCommit` console and click on the `Connect` button.
        - Copy the `Repository URL` from the popup.
        - On our local machine, we need to add the `CodeCommit Repository` as a `Remote Repository` using the following command:
        ```bash
        git remote add origin CODECOMMIT_REPOSITORY_URL
        ```
        - On our local machine, add the upstream origin using the following command (repeat this for all local branches):
        ```bash
        # 'origin' refers to the remote repository, i.e. CODECOMMIT_REPOSITORY_URL
        git push --set-upstream origin LOCAL_BRANCH
        ```
        - It will ask for credentials only once. Specify the credentials we downloaded from the `IAM Console` for our created user.
    - Step #2: Setup `CodeBuild`:
        - Go to `CodeBuild` in the `AWS Console`.
        - Before we create a `CodeBuild Project`, we will need an `IAM Role` that `CodeBuild` can assume on our behalf.
            - For e.g. when we create and deploy our `Serverless Project`, it creates different resources like `Lambda Functions, APIs, DynamoDB Tables, IAM Roles` in the background using `CloudFormation`. When we deploy from our computer, the `AWS Credentials` stored in the environment variables of our computer are used. Now the same deployment has to run from a `Containerized Environment` created by `CodeBuild`. So we must provide the same permissions to `CodeBuild` as we provided to the user which connects to AWS while deploying using the `Serverless Framework`.
        - Go to `IAM` in the `AWS Console` and create a new `Role`.
            - Under `Choose the service that will use this role`, select `CodeBuild` and click on `Continue`.
            - Select access (we can choose `Administrator Access`), click on `Review` and create the `Role`.
            - Now, we can go ahead and create the `CodeBuild` project.
        - Go to the `CodeBuild` console and create a project.
            - Under `Source Provider`, select the `AWS CodeCommit` option. Select the `CodeCommit Repository`.
            - Under `Environment: How to build`:
                - Select the option `Use an image managed by AWS CodeBuild`.
                - Select `Operating System` as `Ubuntu`.
                - Under `Runtime`, select `Node.js`.
                - Select the `Runtime Version`.
                - Under `Build Specifications`, we will use the `buildspec.yml` file.
            - Under `Service Role`, select the `Role` we created.
            - Under `Advanced Settings`, create an `Environment Variable` as `ENV_NAME = dev`. This way we can build a similar project for different environments like `prod, stage` etc..
            - Continue, review the configuration and click on the `Save` button. Do not click on the `Save and Build` button.
- Go to
- Step #3: Create a
buildspec.ymlfile at root of our project.buildspec.ymlfile tellsCodeBuildwhat to do with the sourcecode it downloads from theCodeCommit Repository.- For e.g.
# buildspec.yml version: 0.2 # Note: Each version can use the different syntax. phases: # There are 4 different types of phases we can define here. viz. 'install', 'pre_build', 'build', 'post_build'. Under each phase, we can specify commands for CodeBuild to execute on our be half. If there are any runtime errors while executing commands in particular phase, CodeBuild will not execute the next phase. i.e. If the execution reaches the 'posrt_build' phase, we can be sure that build was successful. - install commands: - echo Installing Serverless.. # This is only for our reference. - npm i -g serverless # Install serverless globally in container. - pre_build commands: - echo Installing NPM dependencies.. - npm i # This will install all the dependencies from package.json. - build commands: - echo Deployment started on `date`.. # This will print current date. - echo Deploying with serverless framework.. - sls deploy -v -s $ENV_NAME # '$ENV_NAME' is coming from environment variable we setup above. - post_build commands: - echo Deployment completed on `date`..
- Commit
buildspec.ymlfile and deploy it toCodeCommit Repository.
    - Step #4 (Optional): If we manually want to build our project:
        - Go to the `CodeBuild Console`, select our project and click on `Start Build`.
            - Select the `CodeCommit Branch` that `CodeBuild` should read from.
            - Click on the `Start Build` button.
            - It will pull the code from the selected branch in the `CodeCommit Repository`, and then run the commands we have specified in the `buildspec.yml` file.
    - Step #5: Setup `CodePipeline`:
        - Go to `CodePipeline` in the `AWS Console` and create a new `Pipeline`.
        - `Source location`:
            - Under `Source Provider`, select `AWS CodeCommit`.
            - Select the `Repository` and `Branch Name` (generally the master branch).
            - We will use `CloudWatch Events` to detect changes. This is the default option. We can change this to make `CodePipeline` periodically check for changes.
                - By using `CloudWatch Events` (i.e. the default option) under the `Change detection options` setting, as soon as we push a change or an update to the `master branch` on `CodeCommit`, this `Pipeline` will get triggered automatically.
            - Click next.
        - Under `Build`:
            - Under the `Build Provider` option, select `AWS CodeBuild`.
            - Under the `Configure your project` options, select `Select existing build project` and under `Project name`, select our existing `CodeBuild` project.
            - Click next.
        - Under `Deploy`:
            - Under `Deployment provider`, since our code deployment will be done through the `Serverless Framework` in the `CodeBuild` step and we have defined our `buildspec.yml` file that way, we need to select the `No Deployment` option.
            - Click next.
        - Under `AWS Service Role`:
            - We need to create a necessary `Role` for the `Pipeline`. Click on the `Create role` button.
            - `AWS` will automatically generate a `Policy` with the necessary `Permissions` for us. So simply click the `Allow` button.
            - Click `Next step` to review the configuration of the `Pipeline`.
        - Click on the `Create Pipeline` button to create and run this `Pipeline`.
    - Now whenever we push changes to the `master branch`, our code will get automatically deployed using `CICD`.
    - Step #6: Production Workflow Setup - Adding a manual approval before production deployment with `CodePipeline`:
        - Once our code gets deployed to the `Dev Stage`, it will be ready for testing. And it will trigger a `Manual Approval` request. The approver will approve or reject the change based on the outcome of testing. If the change gets rejected, the `Pipeline` should stop there. Otherwise, if the change is approved, the same code should be pushed to the `Production Stage`. Following are the steps to implement this workflow:
        - Go to `CodePipeline` in the `AWS Console` and click on the `Edit` button for our created `Pipeline`.
        - After the `Build Stage` using `CodeBuild`, click on the `+ Stage` button to add a new stage.
        - Give this new stage a name, e.g. `ApproveForProduction`.
        - Click on `+ Action` to add a new `Action`.
            - Under the `Action category` option, select `Approval`.
            - Under the `Approval Actions` options:
                - Give an `Action Name`. For e.g. `Approve`.
                - Set `Approval Type` to the `Manual Approval` option.
            - Under the `Manual approval configuration` options:
                - We need to create an `SNS Topic`:
                    - Go to the `SNS Console` under the `AWS Console` and click on `Create Topic`.
                    - Specify the `Topic Name` and `Display Name`. For e.g. `Topic Name: cicd-production-approval` and `Display Name: CICD Production Approval`.
                    - Click on the `Create Topic` button.
                    - Now that the topic has been created, we must `Subscribe` to the topic. Whenever `CodePipeline` triggers the `Manual Approval`, a `Notification` will be sent to this topic. All the subscribers will be notified by email for the approval. To set this up:
                    - Click on the `Create Subscription` button.
                    - Under `Protocol`, select the `Email` option.
                    - Under `Endpoint`, add the email address and click the `Create Subscription` button.
                    - This will trigger a confirmation. Only after we confirm our email address will `SNS` start sending notifications.
                    - The `SNS` setup is done at this point. We can head back to the `Manual approval configuration` options.
                - Under `SNS Topic ARN`, select the `SNS Topic` we just created above.
                - Under `URL For Review`, we can specify the `API URL or Project URL`.
                - Under `Comments`, specify comments if any. For e.g. `Kindly review and approve`.
                - Click on the `Add Action` button.
        - After the `Manual Approval` stage, click on `+ Action` to add a new `Action` for the `Production Build`.
            - Under the `Action category` option, select `Build`.
            - Under the `Build Actions` options:
                - Give an `Action Name`. For e.g. `CodeBuildProd`.
                - Set `Build Provider` to the `AWS CodeBuild` option.
            - Under the `Configure your project` options:
                - Select the `Create a new build project` option. It will be exactly the same as the last one; the only difference is it will use a different value in the `Environment Variables`, viz. `Production`.
                - Specify the `Project Name`. For e.g. `cicd-production`.
            - Under `Environment: How to build`:
                - Select the option `Use an image managed by AWS CodeBuild`.
                - Select `Operating System` as `Ubuntu`.
                - Under `Runtime`, select `Node.js`.
                - Select the `Runtime Version`.
                - Under `Build Specifications`, we will use the `buildspec.yml` file. i.e. select the `Use the buildspec.yml in the source code root directory` option.
            - Under the `AWS CodeBuild service role` options:
                - Select the `Choose an existing service role from your account` option.
                - Under `Role name`, select the existing role we created while setting up `CodeBuild` above.
            - Under `Advanced Settings`, create an `Environment Variable` as `ENV_NAME = prod`. This way we can build a similar project for different environments like `prod, stage` etc..
            - Click on the `Save build project` button.
            - We must provide `Input Artifacts` for this stage. So under the `Input Artifacts` options:
                - Set `Input artifacts #1` to `MyApp`.
            - Click on the `Add action` button.
        - Click on the `Save Pipeline Changes` button. It will pop up a confirmation. Click on the `Save and continue` button. And we are all set.
- `AWS Lambda` uses a decoupled permissions model. It uses 2 types of permissions:
    - `Invoke Permissions`: Require the caller to only have permission to invoke the `Lambda Function`; no more access is needed.
    - `Execution Permissions`: Used by the `Lambda Function` itself to execute the function code.
- Give each `Lambda Function` its own `Execution Role`. Avoid using the same `Role` across multiple `Lambda Functions`. This is because the needs of our `Lambda Functions` may change over time, and in that case we may have to alter permissions for the `Role` assigned to our functions.
- Avoid setting `Wildcard Permissions` on `Lambda Function Roles`.
- Avoid giving `Full Access` to `Lambda Function Roles`.
- Always provide only the necessary permissions, keeping the `Role Policies` as restrictive as possible.
- Choose only the required actions in the `IAM Policy`, keeping the policy as restrictive as possible.
- Sometimes `AWS` might add a new `Action` on a `Resource`, and if our `Policy` is using a `Wildcard` on the `Actions`, it will automatically receive this additional access to the new `Action` even though it may not require it. Hence it's a good and recommended idea to explicitly specify individual `Actions` in the policies and not use `Wildcards` (see the sketch after this list).
- Always make use of `Environment Variables` in `Lambda Functions` to store sensitive data.
- Make use of the `KMS (Key Management Service)` encryption service to encrypt sensitive data stored in `Environment Variables`.
- Make use of the `KMS` encryption service to encrypt sensitive data `At Rest` and `In Transit`.
- Remember that `Environment Variables` are tied to `Lambda Function Versions`. So it's a good idea to encrypt them before we generate the function version.
- Never log the decrypted values or any sensitive data to the console or any persistent storage. Remember that output from `Lambda Functions` is persisted in `CloudWatch Logs`.
- For `Lambda Functions` running inside a `VPC`:
    - Use least-privilege security groups.
    - Use `Lambda Function` specific `Subnets` and `Network Configurations` that allow only the `Lambda Functions` to access `VPC Resources`.
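- A minimal sketch (the table ARN is hypothetical) of a restrictive execution role defined with the Serverless Framework's `iamRoleStatements`, listing individual actions instead of wildcards or full access (for truly per-function roles, a plugin such as `serverless-iam-roles-per-function` is one option):
    ```yaml
    provider:
      name: aws
      runtime: nodejs12.x
      iamRoleStatements:
        - Effect: Allow
          Action:            # Only the actions this service actually needs.
            - dynamodb:GetItem
            - dynamodb:Query
            - dynamodb:PutItem
          Resource: arn:aws:dynamodb:ap-south-1:123456789012:table/adi_notes_app
    ```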
- The following mechanisms are available for controlling `API Gateway` access:
    - `API Keys` and `Usage Plans`.
    - `Client Certificates`.
    - `CORS Headers`.
    - `API Gateway Resource Policies`.
    - `IAM Policies`.
    - `Lambda Authorizers`.
    - `Cognito User Pool Authorizers`.
    - `Federated Identity Access` using `Cognito`.
- When using `CI/CD Pipelines` for automated deployments, make sure appropriate `Access Control` is in place. For e.g. if pushing code to the `master branch` triggers our `Deployment Pipeline`, then we must ensure that only authorized team members have the ability to update the `master branch`.
- Keep declarations/instantiations outside `Lambda Handlers`. This allows `Lambda Handlers` to reuse the objects when `Containers` get reused.
- Keep the `Lambda Handlers` lean. i.e. move the core logic of the `Lambda Function` outside of the `Handler Functions`.
- Avoid hardcoding; use `Environment Variables`.
- One function, one task. This is the `Microservices Architecture`.
- Watch the deployment package size; remove unused dependencies. Check `package.json`. Certain libraries are available by default on `Lambda Functions`. We can remove those libraries from `package.json`.
- Always keep an eye on `Lambda Logs`. Monitor the `Execution Duration` and `Memory Consumption`.
- Grant only the necessary `IAM Permissions` to `Lambda Functions`. Although the serverless team recommends using an `Admin` user while developing `Serverless Framework Apps`.
- In production, give an `API Key` with `PowerUserAccess` at the maximum to the `Serverless Framework User`. Avoid giving `AdministratorAccess`.
- Use the `-c` flag with `Serverless Framework Deployments`. This ensures that the commands only generate the `CloudFormation File` and do not actually execute it. We can then execute this `CloudFormation File` from within the `CloudFormation Console` or as part of our `CI/CD` process.
- If we are creating any temporary files in `/tmp`, make sure to unlink them before we exit out of our handler functions.
- There are restrictions on how many `Lambda Functions` we can create in one AWS account. So make sure to delete unused `Lambda Functions`.
- Always make use of error handling mechanisms and `DLQs`. Put our code in `Try..Catch` blocks, throw errors wherever needed and handle exceptions. Make use of `Dead Letter Queues (DLQ)` wherever appropriate.
- Use a `VPC` only if necessary. For e.g. if our `Lambda Function` needs access to `RDS` (which lives in a `VPC`) or any other `VPC` based resources, then only put our `Lambda Function` in the `VPC`. Otherwise there is no need to put a `Lambda Function` in a `VPC`. `VPCs` are likely to add additional latency to our functions.
- Be mindful when using `Reserved Concurrency`. If we are planning to use `Reserved Concurrency`, then make sure that the other `Lambda Functions` in our account have enough `concurrency` to work with. This is because every `AWS Account` gets `1000 Concurrent Lambda Executions Across Functions`. So if we reserve concurrency for any function, the concurrency limit for other functions is reduced by that amount.
- Keep containers warm so they can be reused. This will reduce the latency introduced by `Cold Starts`. We can easily schedule dummy invocations with `CloudWatch Events` to keep the functions warm (see the sketch below).
- Make use of frameworks like `AWS SAM` or the `Serverless Framework`.
- Use `CI/CD` tools.
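- A minimal sketch (function and handler names are assumptions) of keeping a function warm with a scheduled dummy invocation in `serverless.yml`:
    ```yaml
    functions:
      sayHello:
        handler: src/handler.hello
        events:
          - http:
              path: hello
              method: GET
          - schedule: rate(5 minutes)   # Dummy invocation via CloudWatch Events to keep the container warm.
    ```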
- Keep API definitions as lean as possible. i.e. move all the logic to backend `Lambda Functions`. So, unless absolutely necessary, we can simply use `Lambda Proxy Integration`, where `API Gateway` merely acts as a `Proxy` between the `Caller` and a `Lambda Function`. All data manipulation then happens in one place, i.e. inside the `Lambda Handler Function`.
- Return useful responses back to the caller instead of returning generic server side errors.
- Enable logging options in `API Gateway` so it is easier to track down failures to their causes. Enable `CloudWatch Logs` for APIs.
- When using `API Gateway` in `Production`, it's recommended to use `Custom Domains` instead of `API Gateway URLs`.
- Deploy APIs closer to our customers' regions.
- Add `Caching` to get additional performance gains.
- Most important is `Table Design`. `DynamoDB Tables` provide the best performance when designed for `Uniform Data Access`.
- `DynamoDB` divides the `Provisioned Throughput` equally between all the `Table Partitions`, and hence, in order to achieve maximum utilization of `Capacity Units`, we must design our `Table Keys` in such a way that `Read and Write Loads` are uniform across `Partitions or Partition Keys`. When `DynamoDB Tables` experience `Non-uniform Access Patterns`, they result in what is called a `Hot Partition`, i.e. some partitions are accessed heavily while others remain idle. When this happens, the `Idle Provisioned Capacity` is wasted while we still have to keep paying for it.
- `DAX (DynamoDB Accelerator)` doesn't come cheap.
- When changing the provisioned throughput for any `DynamoDB Table`, i.e. `Scaling Up` or `Scaling Down`, we must avoid `Temporary Substantial Capacity` scale-ups. Note: Substantial increases in `Provisioned Capacities` almost always result in `DynamoDB` allocating additional `Partitions`. And when we subsequently scale the capacity down, `DynamoDB` will not de-allocate the previously allocated `Partitions`.
- Keep `Item Attribute Names` short. This helps reduce the item size and thereby the costs as well.
- If we are to store large values in our items, then we must consider compressing the `Non-Key Attributes`. We can use a technique like `GZip`, for example. Alternatively, we can store large items in `S3` and only store pointers to those items in `DynamoDB`.
- `Scan` operations scan the entire table and hence are less efficient than `Query` operations. That's why we should avoid `Scan` operations. Note: `Filters` always get applied after the `Query` and `Scan` operations are completed. Applicable `RCUs` are calculated before applying the `Filters`.
- While performing read operations, go for `Strongly Consistent Reads` only if our application requires it. Otherwise always opt for `Eventually Consistent Reads`. That saves half the money. Note: Any read operation on a `Global Secondary Index` is `Eventually Consistent`.
- Use `Local Secondary Indexes (LSIs)` sparingly. LSIs share the same partitions, i.e. the same physical space that is used by the `DynamoDB Table`. So adding more LSIs will use more partition space. This doesn't mean we shouldn't use them, but use them as per our application's needs.
- When choosing projections, we can project up to a maximum of `20 Attributes per index`. So choose them carefully, i.e. project as few attributes onto secondary indexes as possible. If we just need `Keys`, then use only `Keys`; it will produce the smallest `Index`.
- Design `Global Secondary Indexes (GSIs)` for uniform data access.
- Use `Global Secondary Indexes (GSIs)` to create `Eventually Consistent Read Replicas`.
- Always use `Timeouts` in `Task States`.
- Always handle errors with `Retriers` and `Catchers`.
- Use `S3` to store large payloads and pass only the `Payload ARN` between states.
- Setup:
    ```bash
    # Install serverless globally.
    sudo npm i -g serverless

    # (Optional) For automatic updates.
    sudo chown -R $USER:$(id -gn $USER) /Users/adiinviter/.config

    # Configure user credentials for the aws service provider.
    sls config credentials --provider aws --key [ACCESS_KEY] --secret [SECRET_KEY] -o

    # Create aws nodejs serverless template.
    sls create -t aws-nodejs

    # Init npm.
    npm init -y

    # Install serverless-offline and serverless-offline-scheduler as dev dependencies.
    npm i serverless-offline serverless-offline-scheduler --save-dev
    ```
- After running the above commands, update the `service` property in `serverless.yml` with your service name.
    - NOTE: The `service` property in the `serverless.yml` file is mostly your project name. It is not the name of your specific lambda function.
- Add the following scripts under `package.json`:
    ```json
    {
        "scripts": {
            "dev": "sls offline start --port 3000",
            "dynamodb:start": "sls dynamodb start --port 8082"
        }
    }
    ```
- Update the `serverless.yml` file with the following config:
    ```yaml
    service: my-project-name

    plugins:
      - serverless-offline            # Add this plugin if you are using it.
      - serverless-offline-scheduler  # Add this plugin if you are using it.

    provider:
      name: aws
      runtime: nodejs12.x
      stage: dev          # Stage can be changed while executing the deploy command.
      region: ap-south-1  # Set region.
    ```
- To add a new lambda function with an api endpoint, add the following in `serverless.yml`:
    ```yaml
    functions:
      hello:
        handler: src/controllers/users.find
        events:
          - http:
              path: users/{id}
              method: GET
              request:
                parameters:
                  paths:
                    id: true
    ```
- To run the project locally:
    ```bash
    # Using npm
    npm run dev

    # Directly using serverless
    sls offline start --port 3000
    ```
- To invoke a lambda function locally:
    ```bash
    sls invoke local -f [FUNCTION_NAME]
    ```
- To run lambda crons locally:
    ```bash
    sudo sls schedule
    ```
- To deploy:
    ```bash
    # To deploy all lambda functions.
    sls deploy -v

    # To deploy a specific function.
    sls deploy -v -f [FUNCTION_NAME]

    # To deploy the project on a different stage (e.g. production).
    sls deploy -v -s production
    ```
- To view logs for a specific function in a specific stage (e.g. dev, prod):
    ```bash
    # Syntax:
    sls logs -f [FUNCTION_NAME] -s [STAGE_NAME] --startTime 10m

    # Use -t to view logs in real time. Good for monitoring cron jobs.
    sls logs -f [FUNCTION_NAME] -s [STAGE_NAME] -t

    # Example #1:
    sls logs -f sayHello -s production --startTime 10m

    # Example #2:
    sls logs -f sayHello -s dev --startTime 15m
    ```
- To remove a project/function (this will delete the deployed `CloudFormation Stack` with all its resources):
    ```bash
    # To remove everything.
    sls remove -v -s [STAGE_NAME]

    # To remove a specific function from a specific stage.
    sls remove -v -f sayHello -s dev
    ```
- To create a simple cron job lambda function, add this to `serverless.yml`:
    ```yaml
    # Below code will execute 'cron.handler' every 1 minute.
    cron:
      handler: src/cron.handler
      events:
        - schedule: rate(1 minute)
    ```
- To configure a `Lambda Function` to run under a `VPC`:
    - We need `Security Group Ids` and `Subnet Ids`. To get them:
        - Under the `AWS Console`, go to `VPC`.
        - Go to `Security Groups` and copy the `Group ID`. We can copy the `default` one. Just one `Security Group Id` is enough. Specify it under `securityGroupIds`.
        - Go to `Subnets`. Each `AWS Region` has a number of `Subnets`. Copy the `Subnet ID` values and specify them under the `subnetIds` option. Although `Serverless` requires `at least 2` subnets, we can copy all the subnets and specify them under the `subnetIds` option.
    - Under the `serverless.yml` file, set:
    ```yaml
    functions:
      hello: # This function is configured to run under a VPC.
        handler: handler.hello
        vpc:
          securityGroupIds: # We can specify 1 or more security group ids here.
            - sg-703jd2847
          subnetIds: # We must provide at least 2 subnet ids.
            - subnet-qndk392nc2
            - subnet-dodh28dg2b
            - subnet-ondn29dnb2
    ```
- Browse to and open a terminal in an empty project directory.
- Execute:
    ```bash
    # Create aws nodejs serverless template.
    sls create -t aws-nodejs

    # Init npm.
    npm init -y

    # Install serverless-offline and serverless-offline-scheduler as dev dependencies.
    npm i serverless-offline serverless-offline-scheduler --save-dev
    ```
- Add the following scripts under `package.json`:
    ```json
    {
        "scripts": {
            "dev": "sls offline start --port 3000",
            "dynamodb:start": "sls dynamodb start --port 8082"
        }
    }
    ```
- Open `serverless.yml` and edit the `service` name as well as set up the `provider`:
    ```yaml
    service: s3-notifications

    provider:
      name: aws
      runtime: nodejs12.x
      region: ap-south-1

    plugins:
      - serverless-offline            # Add this plugin if you are using it.
      - serverless-offline-scheduler  # Add this plugin if you are using it.
    ```
- To install `Serverless` globally:
    ```bash
    sudo npm i -g serverless
    ```
- For automatic updates, after the above command, run:
    ```bash
    sudo chown -R $USER:$(id -gn $USER) /Users/adiinviter/.config
    ```
- To configure aws user credentials, run:
    ```bash
    # -o: To overwrite existing credentials if there are any set already.
    sls config credentials --provider aws --key [ACCESS_KEY] --secret [SECRET_KEY] -o
    ```
- After running the above command, the credentials will get set under the following path: `~/.aws/credentials`.
- Each service is a combination of multiple `Lambda Functions`.
- To create a `NodeJS Serverless Service`:
    ```bash
    sls create -t aws-nodejs
    ```
- To invoke a `Lambda Function` locally:
    ```bash
    # Syntax
    sls invoke local -f [FUNCTION_NAME]

    # Example
    sls invoke local -f myfunct
    ```
- To pass data to a lambda function:
    ```bash
    # Syntax
    sls invoke local -f [FUNCTION_NAME] -d [DATA]

    # Example #1: to pass a single string value into the lambda function.
    sls invoke local -f sayHello -d 'Aditya'

    # Example #2: to pass an object into the lambda function.
    sls invoke local -f sayHello -d '{"name": "Aditya", "age": 33}'
    ```
- The `event` object holds any data passed into the lambda function. To access it:
    - Accessing data passed directly as a string, as shown in `Example #1` above:
    ```javascript
    // Example #1: to pass a single string value into the lambda function.
    // sls invoke local -f sayHello -d 'Aditya'
    module.exports.hello = async event => {
        const userName = event; // Data is available on 'event'.
        return {
            statusCode: 200,
            body: JSON.stringify({ message: `Hello ${userName}` })
        };
    };
    ```
    - Accessing object data passed as shown in `Example #2` above:
    ```javascript
    // Example #2: to pass an object into the lambda function.
    // sls invoke local -f sayHello -d '{"name": "Aditya", "age": 33}'
    module.exports.hello = async event => {
        const { name, age } = event;
        return {
            statusCode: 200,
            body: JSON.stringify({ message: `Hello ${name}, Age: ${age}` })
        };
    };
    ```
- For local development only, use the `Serverless Offline` plugin.
- Plugin:
    - https://www.npmjs.com/package/serverless-offline
    - https://github.com/dherault/serverless-offline
- To install:
    ```bash
    npm i serverless-offline --save-dev
    ```
- Install the Serverless Offline plugin.
- Under `serverless.yml`, add:
    ```yaml
    plugins:
      - serverless-offline
    ```
- Under `package.json`, add a new run script:
    ```json
    "dev": "sls offline start --port 3000"
    ```
- Run:
    ```bash
    npm run dev
    ```
- To deploy the serverless service, run:
    ```bash
    # -v: For verbose.
    sls deploy -v
    ```
- Use the following plugin to set up DynamoDB locally (for offline use):
    - https://www.npmjs.com/package/serverless-dynamodb-local
    - https://github.com/99xt/serverless-dynamodb-local#readme
- To set up:
    ```bash
    npm i serverless-dynamodb-local
    ```
- Register `serverless-dynamodb-local` in the serverless yaml:
    ```yaml
    plugins:
      - serverless-dynamodb-local
    ```
- Install DynamoDB into the serverless project:
    ```bash
    sls dynamodb install
    ```
- APIs can be secured using `API Keys`.
- To generate and use `API Keys`, we need to modify the `serverless.yml` file:
    - Add an `apiKeys` section under `provider`:
    ```yaml
    provider:
      name: aws
      runtime: nodejs12.x
      ########################################################
      apiKeys:    # For securing APIs using API Keys.
        - todoAPI # Provide a name for the API Key.
      ########################################################
      stage: dev  # Stage can be changed while executing the deploy command.
      region: ap-south-1 # Set region.
      timeout: 300
    ```
    - Route by route, specify whether you want it to be `private` or not. For e.g.:
    ```yaml
    functions:
      getTodo: # Secured route.
        handler: features/read.getTodo
        events:
          - http:
              path: todo/{id}
              method: GET
              ########################################################
              private: true # Route secured.
              ########################################################
      listTodos: # Non-secured route.
        handler: features/read.listTodos
        events:
          - http:
              path: todos
              method: GET
    ```
    - After deploying, we will receive the `api keys`. Copy the key to pass it under the headers:
    ```
    λ serverless offline start
    Serverless: Starting Offline: dev/ap-south-1.
    Serverless: Key with token: d41d8cd98f00b204e9800998ecf8427e   # Here is our API Key token.
    Serverless: Remember to use x-api-key on the request headers
    ```
    - Pass the `api key` under the `x-api-key` header while hitting a secured route:
    ```
    x-api-key: d41d8cd98f00b204e9800998ecf8427e
    ```
    - If a wrong value or no value is passed under the `x-api-key` header, then we will receive a `403 Forbidden` error.
- Useful commands for project `05-S3-Notifications`:
    - Set up an aws profile for the `Serverless S3 Local` plugin:
    ```bash
    aws configure --profile s3local

    # Use the following credentials:
    # aws_access_key_id = S3RVER
    # aws_secret_access_key = S3RVER
    ```
    - Trigger S3 event - Put a file into the local S3 bucket:
    ```bash
    aws --endpoint http://localhost:8000 s3api put-object --bucket "aditya-s3-notifications-serverless-project" --key "ssh-config.txt" --body "D:\Work\serverless05円-S3-Notifications\tmp\ssh-config.txt" --profile s3local
    ```
    - Trigger S3 event - Delete a file from the local S3 bucket:
    ```bash
    aws --endpoint http://localhost:8000 s3api delete-object --bucket "aditya-s3-notifications-serverless-project" --key "ssh-config.txt" --profile s3local
    ```
- After running `sls deploy -v`, error: `The specified bucket does not exist`:
    - Cause: This issue occurs when we manually delete the S3 bucket from the AWS console.
    - Fix: Log in to the AWS console and delete the stack from `CloudFormation`.
    - Dirty Fix (Avoid): Delete the `.serverless` directory from the project (Serverless Service).
    - Full Error (Sample):
    ```
    Serverless: Packaging service...
    Serverless: Excluding development dependencies...
    Serverless: Uploading CloudFormation file to S3...

      Serverless Error ---------------------------------------

      The specified bucket does not exist

      Get Support --------------------------------------------
         Docs:          docs.serverless.com
         Bugs:          github.com/serverless/serverless/issues
         Issues:        forum.serverless.com

      Your Environment Information ---------------------------
         Operating System:          darwin
         Node Version:              13.7.0
         Framework Version:         1.62.0
         Plugin Version:            3.3.0
         SDK Version:               2.3.0
         Components Core Version:   1.1.2
         Components CLI Version:    1.4.0
    ```