Merge pull request #47 from gianarb/feature/removing-config-and-multi-provider

Removed multi provider and yml configuration
Gianluca Arbezzano 2017-09-30 13:59:07 +01:00 committed by GitHub
commit d2c82ad415
10 changed files with 14 additions and 563 deletions

README.md

@@ -4,75 +4,17 @@
[![Build Status](https://travis-ci.org/gianarb/orbiter.svg?branch=master)](https://travis-ci.org/gianarb/orbiter)
Public and private cloud or different technologies like virtual machine or
containers. Our applications and our environments require to be resilient,
no matter where they are or which services you are using.
Orbiter is an easy to run autoscaler for Docker Swarm. It is designed to work
out of the box.
This project is a work in progress cross platform open source autoscaler.
We designed in collaboration with InfluxData to show how metrics can be used.
It is based on plugins called `provider`. At the moment we implemented:
* Docker Swarm mode ([go to the zero-conf chapter](https://github.com/gianarb/orbiter#autodetect)); look at the full example
under `./contrib/swarm` directory
* DigitalOcean
We designed it in collaboration with InfluxData to show how metrics can be used to
create automation around Docker tasks.
```sh
orbiter daemon -config config.yml
```
```sh
orbiter daemon
```
Orbiter is a daemon that uses a YAML configuration file to start one or more
autoscalers and exposes an HTTP API to trigger scaling up or down.
Orbiter is a daemon that exposes an HTTP API to trigger scaling up or down.
```yaml
autoscalers:
  events:
    provider: swarm
    parameters:
    policies:
      php:
        up: 4
        down: 3
  infra_scale:
    provider: digitalocean
    parameters:
      token: zerbzrbzrtnxrtnx
      region: nyc3
      size: 512mb
      image: ubuntu-14-04-x64
      key_id: 163422
      # https://www.digitalocean.com/community/tutorials/an-introduction-to-cloud-config-scripting
      userdata: |
        #cloud-config
        runcmd:
          - sudo apt-get update
          - wget -qO- https://get.docker.com/ | sh
    policies:
      frontend:
        up: 2
        down: 3
```
This is an example of a configuration file. Right now we are creating two
autoscalers: one to deal with Swarm, called `events/php`, and a second one for
DigitalOcean, called `infra_scale`.
## Vocabulary
* `provider` contains the integration with the platform to scale. It can be Swarm,
DigitalOcean, OpenStack, AWS.
* Every autoscaler supports a k/v storage of `parameters` that you can use to
configure your provider.
* An `autoscaler` is composed of a provider, parameters and policies. You can have
one or more.
* An autoscaler has one or more policies that contain information about a
specific application.
You can have one or more autoscalers with the same provider, and each autoscaler
can have one or more policies.
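For illustration only (not part of the README), here is a minimal Go sketch of how this vocabulary maps onto the configuration structs removed later in this commit; the struct shapes mirror `PolicyConf`, `AutoscalerConf` and `Conf` from the `core` package below, and the values mirror the `events/php` autoscaler from the YAML example above.

```go
package main

import "fmt"

// Local copies of the shapes of the removed core.PolicyConf, core.AutoscalerConf and core.Conf.
type PolicyConf struct {
	Up       int
	Down     int
	CoolDown int
}

type AutoscalerConf struct {
	Provider   string
	Parameters map[string]string
	Policies   map[string]PolicyConf
}

type Conf struct {
	AutoscalersConf map[string]AutoscalerConf
}

func main() {
	// One autoscaler ("events") on the swarm provider with a single policy ("php"):
	// scale the php service up by 4 tasks and down by 3.
	conf := Conf{
		AutoscalersConf: map[string]AutoscalerConf{
			"events": {
				Provider:   "swarm",
				Parameters: map[string]string{},
				Policies:   map[string]PolicyConf{"php": {Up: 4, Down: 3}},
			},
		},
	}
	fmt.Printf("%+v\n", conf)
}
```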
## Http API
Orbiter exposes an HTTP JSON api that you can use to trigger scaling UP (true)
@@ -104,8 +46,6 @@ curl -v -X GET http://localhost:8000/v1/orbiter/health
```
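As a quick sanity check, here is a small Go sketch (an illustration, not project code) of the same health call shown in the hunk context above, assuming the default `:8000` port set by the daemon's `-port` flag.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// GET /v1/orbiter/health, the same endpoint used by the curl example.
	resp, err := http.Get("http://localhost:8000/v1/orbiter/health")
	if err != nil {
		fmt.Println("orbiter is not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("health status:", resp.Status)
}
```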
## Autodetect
The autodetect mode starts when you don't specify any configuration file.
If you start orbiter with the command:
```
orbiter daemon
@@ -119,11 +59,15 @@ services currently running.
If a service is labeled with `orbiter=true` it's going to auto-register the
service and it's going to enable autoscaling.
If a service is labeled with `orbiter=true` orbiter will auto-register the
service providing autoscaling capabilities.
Let's say that you started a service:
```
docker service create --label orbiter=true --name web -p 80:80 nginx
```
When you start orbiter, it's going to auto-register an autoscaler called
`autoswarm/web`. By default up and down are set to 1 but you can override
them with the labels `orbiter.up=3` and `orbiter.down=2`.
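A minimal sketch of the label convention described above (a hypothetical helper, not orbiter's actual code): up and down default to 1 and can be overridden through the `orbiter.up` and `orbiter.down` service labels.

```go
package main

import (
	"fmt"
	"strconv"
)

// scaleSteps mirrors the behaviour described above: up and down default to 1
// and can be overridden with the orbiter.up / orbiter.down service labels.
func scaleSteps(labels map[string]string) (up, down int) {
	up, down = 1, 1
	if v, err := strconv.Atoi(labels["orbiter.up"]); err == nil {
		up = v
	}
	if v, err := strconv.Atoi(labels["orbiter.down"]); err == nil {
		down = v
	}
	return up, down
}

func main() {
	labels := map[string]string{"orbiter": "true", "orbiter.up": "3", "orbiter.down": "2"}
	up, down := scaleSteps(labels)
	fmt.Printf("autoswarm/web scales up by %d and down by %d tasks\n", up, down)
}
```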
@@ -131,33 +75,8 @@ them with the label `orbiter.up=3` and `orbiter.down=2`.
This scalability allows you to instantiate orbiter in an extremely easy way in
Docker Swarm.
A background job reads the Docker Swarm Event API to keep the registered
services in sync.
## Embeddable
This project also tries to provide an easy API and a clean code base, in order
to allow you to use `orbiter` as a library in your applications.
OpenStack, Kubernetes: all of them have some sort of autoscaling feature that you
can use. The idea is to keep this complexity out of your deployment tools. You can
just embed `orbiter`.
Another use case is a self-deploying application. [Kelsey
Hightower](https://www.youtube.com/watch?v=nhmAyZNlECw) gave a talk about this
idea. I am still not sure it can work in exactly those terms, but we are already
moving things into our applications that used to live in external systems, such as
monitoring and healthchecks, so why not the deployment part?
```go
package scalingallthethings

import (
	"github.com/gianarb/orbiter/autoscaler"
	"github.com/gianarb/orbiter/provider"
)

func CreateAutoScaler() *autoscaler.Autoscaler {
	p, _ := provider.NewProvider("swarm", map[string]string{})
	a, _ := autoscaler.NewAutoscaler(p, "name-service", 4, 3)
	return a
}
```
## With docker


@@ -2,21 +2,20 @@ package cmd
import (
	"flag"
	"io/ioutil"
	"net/http"
	"os"
	"os/signal"
	"path/filepath"
	"strings"
	"context"
	"time"
	"github.com/Sirupsen/logrus"
	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/gianarb/orbiter/api"
	"github.com/gianarb/orbiter/autoscaler"
	"github.com/gianarb/orbiter/core"
	"time"
)
type DaemonCmd struct {
@@ -26,11 +25,9 @@ type DaemonCmd struct {
func (c *DaemonCmd) Run(args []string) int {
	logrus.Info("orbiter started")
	var port string
	var configPath string
	var debug bool
	cmdFlags := flag.NewFlagSet("event", flag.ExitOnError)
	cmdFlags.StringVar(&port, "port", ":8000", "port")
	cmdFlags.StringVar(&configPath, "config", "", "config")
	cmdFlags.BoolVar(&debug, "debug", false, "debug")
	if err := cmdFlags.Parse(args); err != nil {
		logrus.WithField("error", err).Warn("Problem to parse arguments.")
@@ -44,26 +41,6 @@ func (c *DaemonCmd) Run(args []string) int {
		Autoscalers: autoscaler.Autoscalers{},
	}
	var err error
if configPath != "" {
config, err := readConfiguration(configPath)
if err != nil {
logrus.WithField("error", err).Warn("Configuration file malformed.")
os.Exit(1)
}
logrus.Infof("Starting from configuration file located %s", configPath)
err = core.NewCoreByConfig(config.AutoscalersConf, &coreEngine)
if err != nil {
logrus.WithField("error", err).Warn(err)
os.Exit(1)
}
} else {
logrus.Info("Starting in auto-detection mode.")
/* err = core.Autodetect(&coreEngine)
if err != nil {
logrus.WithField("error", err).Info(err)
os.Exit(0)
}*/
}
	// Timer ticker
	timer1 := time.NewTicker(1000 * time.Millisecond)
@@ -119,17 +96,3 @@ Usage: start gourmet API handler.
func (r *DaemonCmd) Synopsis() string {
	return "Start core daemon"
}
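// readConfiguration reads the YAML file at the given path and parses it into a core.Conf.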
func readConfiguration(path string) (core.Conf, error) {
var config core.Conf
filename, _ := filepath.Abs(path)
yamlFile, err := ioutil.ReadFile(filename)
if err != nil {
return config, err
}
config, err = core.ParseYAMLConfiguration(yamlFile)
if err != nil {
return config, err
}
return config, nil
}


@@ -20,7 +20,6 @@ import (
func Autodetect(core *Core) error {
	autoDetectSwarmMode(core)
	if len(core.Autoscalers) == 0 {
		//return errors.New("we didn't detect any autoscaling group")
		logrus.Info("no autoscaling group detected for now")
	}
	return nil


@@ -1,38 +0,0 @@
package core
import (
"github.com/go-yaml/yaml"
)
type PolicyConf struct {
	Up       int `yaml:"up"`       // Number of tasks to start during a scale up
	Down     int `yaml:"down"`     // Number of tasks to stop during a scale down
	CoolDown int `yaml:"cooldown"` // Number of milliseconds to sleep, to avoid scaling too quickly
}
type AutoscalerConf struct {
Provider string `yaml:"provider"`
Parameters map[string]string `yaml:"parameters"`
Policies map[string]PolicyConf `yaml:"policies"`
}
type Conf struct {
//Daemon map[string]Idontknow `yaml:"daemon"`
AutoscalersConf map[string]AutoscalerConf `yaml:"autoscalers"`
}
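// createConfiguration returns a Conf with an initialized, empty autoscalers map.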
func createConfiguration() Conf {
conf := Conf{
AutoscalersConf: map[string]AutoscalerConf{},
}
return conf
}
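// ParseYAMLConfiguration unmarshals raw YAML content into a Conf.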
func ParseYAMLConfiguration(content []byte) (Conf, error) {
config := createConfiguration()
err := yaml.Unmarshal(content, &config)
if err != nil {
return config, err
}
return config, nil
}


@@ -1,56 +0,0 @@
package core
import "testing"
func TestParseDocumentatinWithSingleAutoscaler(t *testing.T) {
	var data = `
autoscalers:
  aws-general:
    provider: aws
    parameters:
      aws_key: 4gegxrt5hxrht6ht
      aws_secret: rgxrtbxrtbrtbrt
    policies:
      micro:
        up: 2
        down: 3
      3d2b152bc3f6:
        up: 2
        down: 3`
conf, err := ParseYAMLConfiguration([]byte(data))
t.Log(conf)
if err != nil {
t.Fatal(err)
}
if conf.AutoscalersConf["aws-general"].Policies["micro"].Up != 2 {
t.Fatal("micro expects Up equals 2")
}
}
func TestParseDocumentatinWithMultipleAutoscaler(t *testing.T) {
	var data = `
autoscalers:
  swarm-first:
    provider: swarm
    policies:
      3d2b152bc3f6:
        up: 2
        down: 3
  aws-general:
    provider: aws
    parameters:
      aws_key: 4gegxrt5hxrht6ht
      aws_secret: rgxrtbxrtbrtbrt
    policies:
      3d2b152bc3f6:
        up: 2
        down: 3`
conf, err := ParseYAMLConfiguration([]byte(data))
if err != nil {
t.Fatal(err)
}
	if len(conf.AutoscalersConf) != 2 {
		t.Fatal("expected 2 autoscalers to be parsed")
	}
}


@@ -1,27 +1,9 @@
package core
import (
	"fmt"
	"github.com/gianarb/orbiter/autoscaler"
	"github.com/gianarb/orbiter/provider"
)
type Core struct {
	Autoscalers autoscaler.Autoscalers
}
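// NewCoreByConfig builds one autoscaler per configured policy, keyed as "<scaler>/<service>", and stores them on the given Core.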
func NewCoreByConfig(c map[string]AutoscalerConf, core *Core) error {
scalers := autoscaler.Autoscalers{}
for scalerName, scaler := range c {
p, err := provider.NewProvider(scaler.Provider, scaler.Parameters)
if err != nil {
return err
}
for serviceName, policy := range scaler.Policies {
scalers[fmt.Sprintf("%s/%s", scalerName, serviceName)] = autoscaler.NewAutoscaler(p, serviceName, policy.Up, policy.Down, policy.CoolDown)
}
}
core.Autoscalers = scalers
return nil
}


@@ -1,109 +0,0 @@
package core
import (
"testing"
"github.com/gianarb/orbiter/autoscaler"
)
func TestNewCore(t *testing.T) {
core := Core{
Autoscalers: autoscaler.Autoscalers{},
}
conf := map[string]AutoscalerConf{
"first-scaler": AutoscalerConf{
Provider: "fake",
Parameters: map[string]string{},
Policies: map[string]PolicyConf{
"frontend": PolicyConf{
Up: 3,
Down: 10,
},
},
},
"second-scaler": AutoscalerConf{
Provider: "fake",
Parameters: map[string]string{},
Policies: map[string]PolicyConf{
"micro": PolicyConf{
Up: 6,
Down: 2,
},
"service": PolicyConf{
Up: 3,
Down: 1,
},
},
},
}
err := NewCoreByConfig(conf, &core)
if err != nil {
t.Fatal(err)
}
	if len(core.Autoscalers) != 3 {
		t.Fatalf("This core needs to have 3 autoscalers. Not %d", len(core.Autoscalers))
	}
}
func TestGetSingleAutoscaler(t *testing.T) {
core := Core{
Autoscalers: autoscaler.Autoscalers{},
}
conf := map[string]AutoscalerConf{
"second": AutoscalerConf{
Provider: "fake",
Parameters: map[string]string{},
Policies: map[string]PolicyConf{
"micro": PolicyConf{
Up: 6,
Down: 2,
},
"service": PolicyConf{
Up: 3,
Down: 1,
},
},
},
}
NewCoreByConfig(conf, &core)
_, ok := core.Autoscalers["second/micro"]
	if !ok {
		t.Fatal("second/micro should exist")
	}
}
func TestNewCoreWithUnsupportedProvider(t *testing.T) {
core := Core{
Autoscalers: autoscaler.Autoscalers{},
}
conf := map[string]AutoscalerConf{
"second-scaler": AutoscalerConf{
Provider: "fake",
Parameters: map[string]string{},
Policies: map[string]PolicyConf{
"micro": PolicyConf{
Up: 6,
Down: 2,
},
"service": PolicyConf{
Up: 3,
Down: 1,
},
},
},
"first-scaler": AutoscalerConf{
Provider: "lalala",
Parameters: map[string]string{},
Policies: map[string]PolicyConf{
"frontend": PolicyConf{
Up: 3,
Down: 10,
},
},
},
}
err := NewCoreByConfig(conf, &core)
if err.Error() != "lalala not supported." {
t.Fatal(err)
}
}


@@ -1,168 +0,0 @@
package provider
import (
"context"
"fmt"
"strconv"
"strings"
"sync"
"time"
"github.com/Sirupsen/logrus"
"github.com/digitalocean/godo"
"github.com/gianarb/orbiter/autoscaler"
"golang.org/x/oauth2"
)
type DigitalOceanProvider struct {
client *godo.Client
config map[string]string
ctx context.Context
}
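// NewDigitalOceanProvider builds a DigitalOcean provider from the given configuration, authenticating against the API with the "token" parameter.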
func NewDigitalOceanProvider(c map[string]string) (autoscaler.Provider, error) {
tokenSource := &TokenSource{
AccessToken: c["token"],
}
oauthClient := oauth2.NewClient(oauth2.NoContext, tokenSource)
client := godo.NewClient(oauthClient)
p := DigitalOceanProvider{
client: client,
config: c,
ctx: context.Background(),
}
return p, nil
}
func (p DigitalOceanProvider) Name() string {
return "digitalocean"
}
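// Scale creates (direction true) or deletes (direction false) up to target droplets for the given service, using the region, size, image, key_id and userdata parameters.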
func (p DigitalOceanProvider) Scale(serviceId string, target int, direction bool) error {
var wg sync.WaitGroup
responseChannel := make(chan response, target)
if direction == true {
for ii := 0; ii < target; ii++ {
wg.Add(1)
go func() {
defer wg.Done()
t := time.Now()
i, _ := strconv.ParseInt(p.config["key_id"], 10, 64)
createRequest := &godo.DropletCreateRequest{
Name: fmt.Sprintf("%s-%s", serviceId, t.Format("20060102150405")),
Region: p.config["region"],
Size: p.config["size"],
UserData: p.config["userdata"],
SSHKeys: []godo.DropletCreateSSHKey{{ID: int(i)}},
Image: godo.DropletCreateImage{
Slug: p.config["image"],
},
}
droplet, _, err := p.client.Droplets.Create(p.ctx, createRequest)
				// Send the result even on error, so the collector below receives exactly `target` responses.
				responseChannel <- response{
					err:       err,
					droplet:   droplet,
					direction: true,
				}
}()
}
} else {
// TODO(gianarb): This can not work forever. We need to have proper pagination
droplets, _, err := p.client.Droplets.List(p.ctx, &godo.ListOptions{
Page: 1,
PerPage: 500,
})
if err != nil {
logrus.WithFields(logrus.Fields{
"provider": "digitalocean",
"error": err,
			}).Warnf("Unable to get the list of droplets.")
return err
}
		ii := 0
		for _, single := range droplets {
			if p.isGoodToBeDeleted(single, serviceId) && ii < target {
				single := single // capture the loop variable for the goroutine below
				wg.Add(1)
				go func() {
					defer wg.Done()
					_, err := p.client.Droplets.Delete(p.ctx, single.ID)
					responseChannel <- response{
						err:       err,
						droplet:   &single,
						direction: false,
					}
				}()
				ii++
			}
		}
}
go func() {
var message string
for iii := 0; iii < target; iii++ {
r := <-responseChannel
if r.err != nil {
message = "We was not able to instantiate a new droplet."
if r.direction == false {
					message = fmt.Sprintf("Unable to delete droplet %d.", r.droplet.ID)
}
logrus.WithFields(logrus.Fields{
"error": r.err.Error(),
"provider": "digitalocean",
}).Warn(message)
} else {
message = fmt.Sprintf("New droplet named %s with id %d created.", r.droplet.Name, r.droplet.ID)
if r.direction == false {
message = fmt.Sprintf("Droplet named %s with id %d deleted.", r.droplet.Name, r.droplet.ID)
}
logrus.WithFields(logrus.Fields{
"provider": "digitalocean",
"dropletName": r.droplet.ID,
}).Debug(message)
}
}
wg.Wait()
}()
return nil
}
// Check if a droplet is eligible to be deleted
func (p DigitalOceanProvider) isGoodToBeDeleted(droplet godo.Droplet, serviceId string) bool {
if droplet.Status == "active" && strings.Contains(strings.ToUpper(droplet.Name), strings.ToUpper(serviceId)) {
// TODO(gianarb): This can not work forever. We need to have proper pagination
actions, _, _ := p.client.Droplets.Actions(p.ctx, droplet.ID, &godo.ListOptions{
Page: 1,
PerPage: 500,
})
// If there is an action in progress the droplet can not be deleted.
for _, action := range actions {
if action.Status == godo.ActionInProgress {
fmt.Println(fmt.Sprintf("%d has an action in progress", droplet.ID))
return false
}
}
return true
}
return false
}
type TokenSource struct {
AccessToken string
}
func (t *TokenSource) Token() (*oauth2.Token, error) {
token := &oauth2.Token{
AccessToken: t.AccessToken,
}
return token, nil
}
type response struct {
err error
droplet *godo.Droplet
direction bool
}


@@ -1,24 +0,0 @@
package provider
import (
"errors"
"fmt"
"github.com/gianarb/orbiter/autoscaler"
)
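// NewProvider returns the provider implementation matching t ("swarm", "digitalocean" or "fake"), configured with c.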
func NewProvider(t string, c map[string]string) (autoscaler.Provider, error) {
var p autoscaler.Provider
var err error
switch t {
case "swarm":
p, err = NewSwarmProvider(c)
case "digitalocean":
p, err = NewDigitalOceanProvider(c)
case "fake":
p = FakeProvider{}
default:
err = errors.New(fmt.Sprintf("%s not supported.", t))
}
return p, err
}


@@ -1,17 +0,0 @@
package provider
import "testing"
func TestUnsupportedProvider(t *testing.T) {
_, e := NewProvider("will-not-exists-never-1546456", map[string]string{})
if e.Error() != "will-not-exists-never-1546456 not supported." {
t.Errorf("We expect an error because will-not-exists-never-1546456 is not supported")
}
}
func TestCreateFakeProvider(t *testing.T) {
_, e := NewProvider("fake", map[string]string{})
if e != nil {
t.Error(e)
}
}