Function-as-a-Service (FaaS) provides an attractive cloud model in which the user implements fine-grained functions connected in an event-driven application and deploys them onto a FaaS platform. The FaaS platform is responsible for the automatic scaling and resource provisioning of the deployed functions. Recently, there has been a growing trend towards executing compute-intensive FaaS functions, such as image processing tasks, that run for several seconds.
However, due to the billing policies followed by most commercial FaaS offerings, the execution of these functions can incur significantly higher costs. Moreover, because the underlying processor architectures on which the functions execute are abstracted away, optimizing these functions is challenging. As a result, most FaaS functions use pre-compiled libraries generic to the x86-64 architecture, leading to performance degradation.
In this paper, we examine the underlying processor architectures for Google Cloud Functions (GCF) and determine their prevalence across the 19 available GCF regions. We modify, adapt, and optimize a representative set of six compute-intensive FaaS workloads using Numba, an LLVM-based JIT compiler for Python, and present results with respect to performance, memory consumption, and costs on GCF. Results from our experiments show that optimizing FaaS functions can improve performance by 44.2x and save costs by 76.8% on average across the six functions.
Our results show that optimizing FaaS functions for the specific underlying architecture is crucial. We achieved a maximum speedup of 1.79x by tuning a function specifically for the instruction set of the underlying processor architecture.