I believe this is one of the most frustrating things about using AWS Lambda. Word on the street is that coding is 30% of the time; the other 70% is trying to get layers working.
A minor version mismatch between your runtime and your dependency? Nope, won’t work. Oh, your dependency has another dependency that doesn’t build on Lambda’s underlying OS? Nope, sorry. You compiled on a machine with an incompatible architecture? Haha, joke’s on you. The worst part? Every upload is a gamble, because you won’t know whether it works until you test-run your function. By the time you get one layer right, you’ve probably published a gazillion versions of it. WTF AWS, I just want to pip install a package, man!
Layers proper
Ok, enough ranting, I’m gonna jump right into how I build layers. It’s the easiest way I can think of. I’ll be using a Python runtime as a demo.
Step 1: Compiling the dependency
I strongly suggest that you use AWS CloudShell for this step. One, CloudShell has internet access; two, it eliminates much of the architecture uncertainty because CloudShell runs on x86_64. I know this sounds very stupid, but in my experience even CloudShell in different accounts can produce differently behaving compiled dependencies. I once tried to upload a compiled zip from account A to a layer in account B, and it didn’t work! WTF.
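Before compiling anything, it’s worth one command to confirm what architecture you’re actually building on, since a layer built on arm64 won’t load on an x86_64 Lambda (and vice versa):

```shell
# Print the machine architecture of the build environment.
# Expect "x86_64" on CloudShell; "aarch64" means you're building for arm64.
uname -m
```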
Since I’m using Python, I will pin a specific Python version. Adjust accordingly for whatever language you’re using.
sudo yum install python3.11
python3.11 -m venv venv311
source venv311/bin/activate
Then create a folder called python and, inside it, a requirements.txt listing your dependencies.
mkdir python
cd python
nano requirements.txt
For my example, I will add paramiko for SSH purposes. It’s a pristine example of how one dependency can have sub-dependencies that the Lambda runtime won’t accept if the versions are wrong. You must use compatible versions of paramiko, bcrypt and cryptography to make paramiko work. Another brain f**k.
Here’s the requirements.txt
cryptography==3.4.8
bcrypt==3.2.2
paramiko==3.5
Save the file, then execute the following.
pip install -r requirements.txt -t .
This will install the dependencies into the python folder we are in, not the virtual environment. The folder name matters: Lambda mounts the layer at /opt and adds /opt/python to the runtime’s sys.path, which is why the top-level folder must be called exactly python.
Sometimes the dependencies come with dist-info folders, which take up space. A layer zip needs to stay under the direct-upload size limit (50 MB zipped), or you get the extra complication of going through S3; the function plus all its layers is also capped at 250 MB unzipped. Typically, I run the command below to trim the size down.
find . -name "*.dist-info" -exec rm -rf {} +
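Here’s the same trim sketched on a throwaway folder, so you can see what it does before pointing it at your real layer (the folder names below are made up for illustration). I also remove __pycache__ folders, which are equally dead weight:

```shell
# Throwaway demo of the trim; the real command is the same `find`
# run inside your layer folder. Hypothetical folder names.
mkdir -p trim_demo/cryptography-3.4.8.dist-info trim_demo/paramiko/__pycache__
find trim_demo -name "*.dist-info" -exec rm -rf {} +
find trim_demo -name "__pycache__" -exec rm -rf {} +
ls trim_demo   # prints: paramiko
```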
Now, go up one directory and zip the folder.
cd ../
zip -r9 python-layer.zip python/
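Before uploading, it’s worth double-checking the archive layout: every entry must begin with python/ or the runtime won’t see your packages. Here’s a sketch on a throwaway folder (point the listing command at your real python-layer.zip); I use python3 -m zipfile so it works even where unzip isn’t installed:

```shell
# Build a tiny stand-in layer folder and zip it (stdlib zipfile, no `zip` needed).
mkdir -p zipdemo/python/paramiko
touch zipdemo/python/paramiko/__init__.py
(cd zipdemo && python3 -m zipfile -c python-layer.zip python/)
# List the archive: every file path should start with "python/".
python3 -m zipfile -l zipdemo/python-layer.zip
```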
Then download the file (in CloudShell: Actions → Download file, giving it the path to python-layer.zip).


Step 2: Upload to Lambda
This is the easy part. Create the layer, upload the zip, and make sure the runtime and architecture match what you built with. For me, that’s python3.11 and x86_64.
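If you prefer the CLI over the console, publishing the layer looks roughly like this (a sketch: the layer name is made up, and you need credentials allowed to call lambda:PublishLayerVersion):

```shell
# Hypothetical layer name; match runtime/architecture to your build.
aws lambda publish-layer-version \
  --layer-name paramiko-py311 \
  --zip-file fileb://python-layer.zip \
  --compatible-runtimes python3.11 \
  --compatible-architectures x86_64
```

The command prints the new layer version’s ARN, which is what you attach to the function.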

Conclusion
Well, I can only show you what I know works for me, but as always, it’s a gamble with Lambda layers. It TRULY baffles me why AWS doesn’t just provide a simple option when creating layers to “Add Internet Dependencies”, where we could paste in our requirements.txt or package.json. I’d really appreciate it if some Lambda engineer could enlighten me on the rationale; an AWS Muggle like me just ain’t seeing the magic.