
component, occurs due to the customer's underestimation of security across the whole virtual environment. It can result from missing system or firmware updates, but very often it is due to weak passwords and a missing or default-state firewall [Jak15].

2.4 Security of FaaS (Serverless)

Security of serverless systems is currently an open research question. Hosting arbitrary user code in containers on multi-tenant systems is a dangerous proposition, and care must be taken when constructing and running function containers to prevent vulnerabilities. This intersection of remote procedure calls and container security represents a significant real-world test of general container security. Therefore, although serverless platforms are able to carefully construct function containers and restrict function permissions, increasing the chances of secure execution, further study is needed to assess the attack surface within function execution environments.

3. Portability Problem

There are many types and providers of Cloud Computing services, and problems can arise when organizations want to move their applications from one provider to another. This is less of a problem for SaaS or PaaS applications: the customer can ask the new provider to build the same environment the applications can easily be moved to, and in the PaaS model the cloud provider is responsible for maintaining all hardware, infrastructure and middleware. Bigger problems may arise when migration occurs in the IaaS cloud model, as the customer is responsible for the virtual infrastructure: topology and networking, virtual machines and operating systems. So if customers would like to migrate their environment, they would need to move their virtual machines. These environments are usually built by scripts.
In many respects, scripts are more powerful than a web-based GUI (Graphical User Interface), so all the deployment scripts would need to be rewritten for migration according to the new provider's environment. Usually this is not a trivial task, as different providers use different syntax and the conception of the scripts can also differ. Some providers offer very specific functions that other providers might not. If a customer uses such a feature, they are unable to move their environment to another provider. This situation is often called "vendor lock-in". The portability problem is one of the biggest problems Cloud Computing users have to deal with. Currently there are limitations on creating an automated solution to simplify environment migration between different cloud providers.

4. Serverless Cloud Mechanism

Serverless Computing, also known as Function as a Service (FaaS), is emerging as a new paradigm for the deployment of cloud applications, largely due to the recent shift of enterprise application architectures to containers and microservices. On one hand, it provides developers with a simplified programming model for creating cloud applications that abstracts away most, if not all, operational concerns; it lowers the cost of deploying cloud code by charging for execution time rather than resource allocation; and it is a platform for rapidly deploying small pieces of code that respond to events. On the other hand, deploying such applications in a serverless platform is challenging and requires relinquishing to the platform design decisions that concern quality-of-service monitoring, scaling and fault-tolerance schemes. The evolution of serverless computing is shown in Figure 4.
From the perspective of a cloud provider, serverless computing provides an additional opportunity to control the entire development stack, reduce operational costs through efficient optimization and management of cloud resources, and enable a serverless ecosystem that encourages the deployment of additional cloud services.

Figure 4: Evolution of serverless computing

Serverless platforms promise new capabilities that make writing scalable microservices easier and cost-effective, positioning themselves as the next step in the evolution of cloud computing architectures. Most of the prominent cloud computing providers, including Amazon, IBM, Microsoft, and Google, have recently released serverless computing capabilities. There are also several open-source efforts, including the OpenLambda project. Cloud services are on-demand computing services which are cheap, flexible and easy to handle. Still, there are many software formalities to deal with before executing algorithms in the cloud, such as installing server instances and installing a database server to store and manage data. With serverless infrastructure this becomes easier: all these formalities can be ignored, and the concepts to deal with are triggers and responses. In a serverless architecture the program deals only with events and the virtual store, avoiding real interaction with servers and the complexity associated with it. Beyond the technical convenience of reducing boilerplate code, the billing economics of serverless platforms such as AWS Lambda have a significant impact on the architecture and design of systems. Previous studies have shown cost reductions in laboratory experiments [Vil17].

4.1 Serverless Architecture

There are a lot of misconceptions surrounding serverless, starting with the name. Servers are still needed, but developers need not concern themselves with managing those servers.
Decisions such as the number of servers and their capacity are taken care of by the serverless platform, with server capacity automatically provisioned as needed by the workload. This provides an abstraction where computation is disconnected from where it is going to run. The core capability of a serverless platform is that of an event processing system. The service must manage a set of user-defined functions and stop the functions when they are no longer needed. The challenge is to implement such functionality while considering metrics such as cost, scalability, and fault tolerance. Fail-over and load balancing are critical for handling usage loads exceeding the capacity of a single machine [Adz17]. With AWS Lambda, where application developers are no longer in control of the server process, usage is billed only when an application actively processes events, not when it is waiting, which means that application idle time is free.

4.2 Characteristics of Serverless Computing

There are a number of characteristics that help distinguish the various serverless platforms.

Cost: Typically, usage is metered and users pay only for the time and resources used while serverless functions are running. This ability to scale to zero instances is one of the key differentiators of a serverless platform. The resources that are metered and the pricing models vary among providers.

Performance and limits: There are a variety of limits set on the runtime resource requirements of serverless code, including the number of concurrent requests and the maximum memory and CPU resources available to a function invocation. Some limits may be increased as users' needs grow.

Programming languages: Serverless services support a wide variety of programming languages, including JavaScript, Java, Python, Go, C#, and Swift. Most platforms support more than one programming language.
Some of the platforms also support extensibility mechanisms for code written in any language, as long as it is packaged in a Docker image that supports a well-defined API.

Programming model: Serverless platforms typically execute a single main function that takes a dictionary as input and produces a dictionary as output.

Composability: The platforms generally offer some way to invoke one serverless function from another, but some platforms provide higher-level mechanisms for composing these functions, which may make it easier to construct more complex serverless apps.

Deployment: Typically, developers just need to provide a file with the function source code, or package it as an archive with multiple files or inside a Docker image.

Security and accounting: Serverless platforms are multi-tenant and must isolate the execution of functions between users and provide detailed accounting so users understand how much they need to pay.

Monitoring and debugging: Every platform supports basic debugging by using print statements that are recorded in the execution logs. Additional capabilities may be provided to help developers find bottlenecks, trace errors, and better understand the circumstances of function execution [Bal17].

4.3 Types of serverless applications

One use case of a serverless application is image file processing: users upload large photos to a server, and the system needs to produce alternative versions of those images, as shown in Figure 5.
Figure 5: Image file processing using AWS Lambda

Serverless is suitable for short-running, stateless, event-driven applications: microservices, mobile backends, bots.

Serverless is not suitable for long-running, stateful, number-crunching applications: databases, deep learning training, heavy-duty stream analytics, Spark/Hadoop analytics, numerical simulation, video streaming.

4.3.1 Deep Learning

Deep learning, driven by large neural network models, is overtaking traditional machine learning methods for understanding unstructured and perceptual data domains such as speech, text, and vision. The rise of deep learning [Lec15] from its roots to becoming the state of the art of AI has been fueled by three recent trends: the explosion in the amount of training data, the use of accelerators such as graphics processing units (GPUs), and advancements in the design of models used for training. These three trends have made the task of training deep-layer neural networks with large amounts of data both tractable and useful. Using any of the deep learning frameworks (e.g., TensorFlow [Aba16]), users can develop and train their models. Neural network models range in size from small (5 MB) to very large (500 MB). Training neural networks can take a significant amount of time, and the goal is to find suitable weights for the different variables in the network. Once model training is complete, the model can be used for inference serving, applying the trained model to new data in domains such as natural language processing, speech recognition, or image classification.

4.3.2 Deep Learning Limitations on Serverless

Even though the amount of resources available in serverless computing environments is increasing all the time, the resources currently available are still limited. For example, the ephemeral disk capacity available to AWS Lambda functions is limited to 512 MB, which limits the use of serverless platforms for serving large neural network models, which can be larger than 500 MB.
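Models that do fit within these limits are commonly served by loading them once per container at cold start and reusing them across warm invocations. The Python sketch below illustrates this pattern together with the dictionary-in/dictionary-out programming model described earlier; fake_load() is a hypothetical stand-in for loading real model weights with a framework such as TensorFlow.

```python
# Sketch of model serving in a FaaS function: the module-level cache is
# populated on the first (cold) invocation and reused on warm invocations,
# so the expensive model load is not repeated for every request.

_model = None  # survives between invocations within the same container

def fake_load():
    # hypothetical placeholder for an expensive load from ephemeral storage
    return {"weights": [0.5, -0.25], "bias": 0.1}

def handler(event, context=None):
    """Dictionary in, dictionary out, as in the serverless programming model."""
    global _model
    if _model is None:  # cold start: pay the load cost once per container
        _model = fake_load()
    x = event["x"]
    score = sum(w * x for w in _model["weights"]) + _model["bias"]
    return {"score": score}
```

Because the cache is only an optimization, the function still behaves correctly if the container is recycled between calls; it simply pays the load cost again.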
Increased cost is not always correlated with better performance. There is a need for tools that analyze previous function executions and suggest changes in declared resources. Another option would be to scale the container vertically [Ald17] for optimal cost/performance based on a customer's predefined budget and performance targets.

4.4 Commercial platforms

Amazon's AWS Lambda [Aws17] was the first serverless platform, and it defined several key dimensions including cost, programming model, deployment, resource limits, security, and monitoring. Supported languages include Node.js, Java, Python, and C#. Initial versions had limited composability, but this has been addressed recently. The platform takes advantage of the large AWS ecosystem of services, making it easy to use Lambda functions as event handlers and to provide glue code when composing services.

Currently available as an Alpha release, Google Cloud Functions [Goo17] provides basic FaaS functionality to run serverless functions written in Node.js in response to HTTP calls or events from some Google Cloud services. The functionality is currently limited but expected to grow in future versions.

Microsoft Azure Functions [Azu17] provides HTTP webhooks and integration with Azure services to run user-provided functions. The platform supports C#, F#, Node.js, Python, PHP, bash, or any executable. The runtime code is open source and available on GitHub under an MIT license. To ease debugging, the Azure Functions CLI provides a local development experience for creating, developing, testing, running, and debugging Azure Functions.

IBM OpenWhisk [Ope17] provides event-based serverless programming with the ability to chain serverless functions to create composite functions. It supports Node.js, Java, Swift, Python, as well as arbitrary binaries embedded in a Docker container. OpenWhisk is available on GitHub under an Apache open source license.
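To make the metered, pay-per-use pricing of these platforms concrete, the following sketch estimates a bill in the GB-second style used by AWS Lambda. The rates are illustrative assumptions for the sketch, not a provider's current price list.

```python
def estimate_cost(invocations, avg_duration_s, memory_gb,
                  price_per_gb_s=0.0000166667,   # assumed compute rate
                  price_per_request=0.0000002):  # assumed per-request rate
    # Billing is proportional to running time and allocated memory;
    # idle time between invocations costs nothing (scale to zero).
    compute = invocations * avg_duration_s * memory_gb * price_per_gb_s
    requests = invocations * price_per_request
    return compute + requests
```

Under these assumed rates, one million 100 ms invocations at 128 MB come to well under a dollar, which is why bursty workloads with long idle periods are the sweet spot for this billing model.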
4.5 Serverless Benefits and Drawbacks

Compared to IaaS platforms, serverless architectures offer different tradeoffs in terms of control, cost, and flexibility. In particular, they force application developers to carefully think about the cost of their code when modularizing their applications, rather than about latency, scalability, and elasticity, which is where significant development effort has traditionally been spent.

The serverless paradigm has advantages for both consumers and providers. From the consumer perspective, a cloud developer no longer needs to provision and manage servers, VMs, or containers as the basic computational building block for offering distributed services. Instead, the focus is on the business logic, defined as a set of functions whose composition enables the desired application behavior. The stateless programming model gives the provider more control over the software stack, allowing it to, among other things, more transparently deliver security patches and optimize the platform.

There are, however, drawbacks for both consumers and providers. For consumers, the FaaS model offered by the platform may be too constraining for some applications. For example, the platform may not support the latest Python version, or certain libraries may not be available. For the provider, there is now a need to manage issues such as the lifecycle of users' functions, scalability, and fault tolerance in an application-agnostic manner. This also means that developers have to carefully understand how the platform behaves and design the application around these capabilities.

One property of serverless platforms that may not be evident at the outset is that the provider tends to offer an ecosystem of services that augment the user's functions. For example, there may be services to manage state, record and monitor logs, send alerts, trigger events, or perform authentication and authorization.
Such rich ecosystems can be attractive to developers, and they present another revenue opportunity for the cloud provider. However, the use of such services brings with it a dependence on the provider's ecosystem, and a risk of vendor lock-in.

5. Serverless programming model

Serverless functions have limited expressiveness, as they are built to scale. Their composition may also be limited and tailored to support cloud elasticity. To maximize scaling, serverless functions do not maintain state between executions. Instead, the developer can write code in the function to retrieve and update any needed state. The function is also able to access a context object that represents the environment in which the function is running (such as a security context).

6. Serverless use cases and workloads

Serverless computing has been utilized to support a wide range of applications. From a functionality perspective, serverless and more traditional architectures may be used interchangeably. The determination of when to use serverless will likely be influenced by other, non-functional requirements, such as the amount of control over operations required, cost, and application workload characteristics.

From a cost perspective, the benefits of a serverless architecture are most apparent for bursty, compute-intensive workloads. Bursty workloads fare well because the developer offloads the elasticity of the function to the platform, and, just as important, the function can scale to zero, so there is no cost to the consumer when the system is idle. Compute-intensive workloads are appropriate since in most platforms today the price of a function invocation is proportional to its running time. Hence, I/O-bound functions pay for compute resources that they are not fully taking advantage of. In this case, a multi-tenant server application that multiplexes requests may be cheaper to operate.
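Because serverless functions keep no state between executions, any state they need must be retrieved from and written back to an external service. A minimal sketch of this retrieve-update-write-back pattern, with an in-memory dictionary standing in for an external key-value store (all names hypothetical):

```python
# In production the state would live in an external service such as a
# key-value store; the dictionary below is an in-memory stand-in so the
# sketch stays self-contained.
store = {}

def counter_handler(event, context=None):
    # Retrieve the needed state, update it, and write it back: the
    # function itself remains stateless between invocations.
    key = event["user"]
    count = store.get(key, 0) + 1
    store[key] = count
    return {"user": key, "count": count}
```

Note that a real external store turns every state access into a network call, which is one reason I/O-bound functions fare poorly under per-duration billing.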
From a programming model perspective, the stateless nature of serverless functions lends itself to application structures similar to those found in functional reactive programming [Bai13]. This includes applications that exhibit event-driven and flow-like processing patterns.

7. Serverless challenges and open problems

7.1 System-level challenges

Here is a list of challenges at the systems level.

Cost: Cost is a fundamental challenge. This includes minimizing the resource usage of a serverless function, both when it is executing and when it is idle. Another aspect is the pricing model, including how it compares to other cloud computing approaches. For example, serverless functions are currently most economical for CPU-bound computations, whereas I/O-bound functions may be cheaper on dedicated VMs or containers.

Cold start: Getting serverless code ready to run quickly.

Resource limits: Resource limits are needed to ensure that the platform can handle load spikes and manage attacks.

Security: Strong isolation of functions is critical, since functions from many users are running on a shared platform.

Scaling: The platform must ensure the scalability and elasticity of users' functions.

Legacy systems: It should be easy to access older cloud and non-cloud systems from code running in serverless platforms.

7.2 Programming model and DevOps challenges

Tools: Traditional tools that assumed access to servers in order to monitor and debug applications are not applicable in serverless architectures, and new approaches are needed.

Deployment: Developers should be able to use declarative approaches to control what is deployed, with tools to support them.

Monitoring and debugging: As developers no longer have servers they can access, serverless services and tools need to focus on developer productivity.

IDEs: Higher-level developer capabilities, such as refactoring functions (e.g., splitting and merging functions), reverting to an older version, etc.,
will be needed and should be fully integrated with serverless platforms.

Composability: This includes being able to call one function from another, creating functions that call and coordinate a number of other functions, and higher-level constructs such as parallel executions and graphs.

Long running: Currently, serverless functions are often limited in their execution time, yet there are scenarios that require long-running logic.

State: Real applications often require state, and it is not clear how to manage state in stateless serverless functions; programming models, tools, libraries, etc. will need to provide the necessary levels of abstraction.

Concurrency: Expressing concurrency semantics, such as atomicity (function executions that need to be serialized).

Recovery semantics: These include exactly-once, at-most-once, and at-least-once semantics.

Code granularity: Currently, serverless platforms encapsulate code at the granularity of functions. It is an open question whether coarser- or finer-grained modules would be useful.

8. Conclusion

Cloud Computing is going through an evolution toward higher levels of abstraction in cloud programming models, currently exemplified by the Function-as-a-Service (FaaS) model, where developers write small stateless code snippets and allow the platform to manage the complexities of executing the function scalably and in a fault-tolerant manner. This seemingly restrictive model nevertheless lends itself well to a number of common distributed application patterns, including compute-intensive event processing pipelines. Most of the large cloud computing vendors have released their own serverless platforms, and there is a tremendous amount of investment and attention around this space in industry. Unfortunately, there has not been a corresponding degree of interest in the research community.
There is still a wide variety of technically challenging and intellectually deep problems in this space, ranging from infrastructure issues, such as optimizations to the cold-start problem, to the design of a composable programming model. The serverless paradigm may eventually lead to new kinds of programming models, languages, and platform architectures, and that is certainly an exciting area for the research community to participate in and contribute to.
