About my Thesis

Cloud computing is now widespread and reaching maturity as a technology. Users and companies find that, through the use of Virtual Machines (VMs), the need for over-provisioning of resources has been reduced. Moreover, thanks to on-demand and elastic provisioning of resources, a service can more easily adapt to changing circumstances, such as the number of clients connected to it at the same time or varying computational requirements (the platform is dynamically adapted to the workload). As a consequence, Cloud Computing can provide a better Quality of Service to the user while making more efficient use of energy than traditional architectures.

To supply the needed resources, Cloud providers build complex datacenters that host these resources and to which clients connect. This centralized-datacenter approach is currently the most realistic option for deploying computation-heavy services: it relies on a robust communication infrastructure between distant clients and the datacenter, giving access to computing power that would otherwise be unattainable. However, centralized architectures suffer from downsides such as traffic delays, data replication, and scalability issues related to the physical constraints of datacenter resources. Moreover, centralization forces other actors involved in the communication (such as Internet Service Providers) to oversize their infrastructure accordingly [1].

In our work we show that centralizing computation in datacenters can become counterproductive from both an energy-efficiency and a Quality-of-Service perspective. We focus on the network interconnecting the final user and the datacenter, which has largely been neglected in the literature, and demonstrate that in scenarios where users are geographically close to each other and the computation can be split into smaller chunks (as is the case for smart-city infrastructures), a decentralized approach can drastically reduce energy consumption while at the same time providing better Quality of Service.