Autoresizing Persistent Disks in Compute Engine

Got a challenge the other day:

Is it possible to automatically resize a Persistent Disk in Google Compute Engine?

The answer is yes – with a few caveats.  

This solution really only works with persistent disks that are not root disks. Root disks seem to need a reboot to make the extra space available, and automatically rebooting seems like a bad idea. So if you run it on a root disk it will work, but the extra space won’t be available until you manually reboot the machine.

Be careful with quotas. My solution here has a default maximum disk size of 64TB, because that is the largest a GCE persistent disk can be. You may want to be more conservative with your limit, because disk size = money. Your account also has a quota for the amount of SSD you can allocate; as of this writing it is 2TB. You can always raise it, but this script cannot get around your quota, and will fail if it tries to exceed it.

All that out of the way, let’s give this a shot.

Step 1 – Script it

The first step is to put together a script that:

  • Checks the utilization of a disk.
  • If the utilization is too high, resizes the disk in Google Cloud Platform
  • Then also resizes the disk on the host OS.

There are a couple of other things we want to configure in this script:

  • What is the threshold percent that is high enough to resize the disk?
  • What is the factor by which we’ll increase the disk? Double it? Triple it?
  • What is the maximum limit to which we will increase the disk?

Keeping all of that in mind, here is my solution in Bash for Debian (our default OS choice on Compute Engine). As you can see, it’s a mix of gcloud commands and df.

#!/bin/bash

# Usage info
show_help() {
cat << EOF
Usage: ${0##*/} -d CLOUDDISK [-t THRESHOLD] [-f FACTOR] [-m MAX]
Checks the disk utilization of CLOUDDISK and if it is over the THRESHOLD
increase the disk size by multiplying current size by FACTOR as long as it
does not exceed MAX.
    -c              Check to make sure you have properly authorized service 
                    account. 
                    SUCCESS = display from gcloud compute disks list
                    FAILURE = ERROR - Insufficient Permission
    -h              Display this help and exit
    -d CLOUDDISK    The Google Cloud Disk name to check. This name can be seen
                    running 'gcloud compute disks list'
    -t THRESHOLD    The percentage (0-100) above which to resize the disk. 
                    DEFAULT 90
    -f FACTOR       The multiplier to resize the disk by. A 1GB disk with
                    a factor of 2 will be resized to 2GB. 
                    DEFAULT 2.
    -m MAX          The limit in GB beyond which we will not resize a disk. 
                    DEFAULT 64000GB.
Examples:
Run with defaults on a disk named 'storage' - 
    ${0##*/} -d storage

Check if the disk 'storage' is more than 50% usage, if so quadruple the disk 
to a limit of 1000GB 
    ${0##*/} -d storage -t 50 -f 4 -m 1000
    
EOF
}

check_perms() {
    /usr/local/bin/gcloud compute disks list

}

# Initialize our own variables:
THRESHOLD=90
FACTOR=2
MAX=64000
while getopts "d:t:m:f:hc" opt; do
    case "$opt" in
        h)
            show_help >&2
            exit
            ;;
        c)
            check_perms >&2
            exit
            ;;    
        d)  
            CLOUDDISK=$OPTARG
            ;;
        t)  
            THRESHOLD=$OPTARG
            ;;
        m)  
            MAX=$OPTARG
            ;;        
        f)  
            FACTOR=$OPTARG
            ;;
    esac
done
if [ "$CLOUDDISK" = "" ]
then
    echo "You must set a CLOUDDISK using -d option. Run ${0##*/} -h for more help. "
    exit
fi

# Get variables for scale parameters
LOCALDISK=`readlink -f /dev/disk/by-id/google-$CLOUDDISK`

# Get current usage in percentage expressed as a number between 1-100
tmp=`df $LOCALDISK | awk '{ print $5 }' | tail -n 1`
USAGE="${tmp//%}"

# Check to see if disk is over threshold. 
if [ $USAGE -lt $THRESHOLD ]
then
        echo "Disk is within threshold"
        exit
else
        echo "Disk is over threshold, attempting to resize"
fi

# Get Current size of disk
tmp2=`df -BG $LOCALDISK | awk '{ print $2 }' | tail -n 1`
CURRENTSIZE="${tmp2//G}"

# Compute next size of disk. 
PROPOSEDSIZE=$(( CURRENTSIZE * FACTOR ))
if [ $PROPOSEDSIZE -gt $MAX ]
then
        echo "Proposed disk size ($PROPOSEDSIZE)GB is higher than the max allowed ($MAX)GB."
        exit
else
        echo "Proposed disk size acceptable, attempting to resize"
fi

# RESIZE IT
ZONE=`/usr/local/bin/gcloud compute disks list $CLOUDDISK | awk '{ print $2 }' | tail -n 1`
/usr/local/bin/gcloud compute disks resize $CLOUDDISK --size "$PROPOSEDSIZE"GB --zone $ZONE --quiet

# Tell the OS that the disk has been resized.
sudo resize2fs /dev/disk/by-id/google-"$CLOUDDISK"

Source is also available on GitHub.

You can find the reference for the gcloud commands in the documentation.

Step 2 – Authorize it

The next step is to make sure this script can run at all.  To do that we have to delve into Cloud IAM.

First we want to create a service account. During this process we have the option to ‘Furnish a new private key’. This will cause a key file to be downloaded at the end of account creation. Choose JSON and keep track of the JSON file that gets downloaded after you click ‘Create’.

Add the service account to the IAM role Compute Storage Admin. Then remove the service account from the project-level role Editor. We want it to have no more permission than it needs.
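If you prefer to script that grant, newer gcloud releases can do it from the command line. Here is a sketch – ‘disk-resizer’ is a hypothetical service account name, and roles/compute.storageAdmin is the role ID behind Compute Storage Admin:

# Hypothetical service account name; substitute your own project ID.
gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member serviceAccount:disk-resizer@[PROJECT_ID].iam.gserviceaccount.com \
    --role roles/compute.storageAdmin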

Copy the JSON file to the Compute Engine machine to which the disk you wish to monitor is attached.

Authorize the service account using the following command.

gcloud auth activate-service-account --key-file [YOUR KEY FILE].json
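To confirm it took, list the credentialed accounts – the service account should show up as active:

gcloud auth list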

My co-worker, Sandeep, has a good video tutorial about service accounts if you need more information.

Step 3 – Test it

Assuming you have installed the autoscale-disk script from Step 1 and set up permissions correctly, you are ready to test it.

To check the permissions, run:

autoscale-disk -c

If you see the output of gcloud compute disks list there, you got it right. If you do not, you will see a FAILURE message.

Step 4 – Cron it

Once you have the script installed, and you have tested it – it’s time to set it and forget it. Add it to crontab with your desired settings.

I’m setting this up to check every minute, because it’s pretty lightweight when it isn’t actually resizing disks. However, do what you will. You might also want to pipe the output to a log. Again, your call.
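For reference, here’s the sort of crontab entry I mean. This is just a sketch; it assumes the script is installed as /usr/local/bin/autoscale-disk and the disk is named ‘storage’:

# Check utilization every minute and append the output to a log.
* * * * * /usr/local/bin/autoscale-disk -d storage >> /var/log/autoscale-disk.log 2>&1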

Conclusions

There you have it, autoscaling a disk based on utilization with a cron job. What I love about this idea is that it is so very cloudy. On prem, even if you have a pool of storage, eventually you run out, so sizing up a disk isn’t a sure thing.  But in a cloud world, if you need more it’s always just an API call away.


Migrating App Engine Standard to Cloud SQL v2

I recently discovered that Google Cloud SQL v2 now supports the App Engine standard runtime. This is very exciting for me. I wanted to try out the process and make sure there were no gotchas.

  • I created a new Cloud SQL v2 instance.
  • I used my syncing script from my blog post Migrating between Cloud SQL databases to move the data to the new instance.
  • I created a new App Engine module by deploying a new version of my app using the old code base.
  • I changed the connection string from the old database to the new one. The pattern to make this happen has changed a bit, more down below.
  • That was it. The new code served up just fine. I kept serving from the old module until I made the connection string config tweak to the old code base.

New connection string

The new connection strings are only a little different than the old ones, and should require just a change to one string in your config.

The directions will tell you to look for your Instance connection name in the Instance properties of your Cloud SQL Developer Console. There are two patterns that these strings come in. 

  • V1 connection names follow the pattern projectid:instancename
  • V2 connection names follow the pattern projectid:regionname:instancename.

It’s a pretty simple change, but I can see someone accidentally (or willfully) not reading the documentation and getting tripped up on this. The new connection strings require the region name; that’s all there is to it. I’ve tested this on PHP, and I assume it works everywhere, but your mileage may vary. Golang tests are coming soon; I will update when I make that change.
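If you don’t want to hunt through the console, gcloud can probably pull the connection name for you. A sketch – ‘myinstance’ is a hypothetical instance name, and the output field may differ across gcloud versions:

gcloud sql instances describe myinstance | grep connectionName
# connectionName: myproject:us-central1:myinstance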

Migrating between Cloud SQL databases

I run a bunch of SQL databases on Cloud SQL v1, and I wanted to move them over to Cloud SQL v2.   I like to automate this sort of thing.  I also like to have the new database essentially mirror the old one, until I’m ready to cut over.

I looked into writing a script that could do that with gcloud.  Turns out,  it is incredibly simple. The sql tools in gcloud can import and export directly to Cloud Storage.

PROJECT=[Project ID]
SRC_INSTANCE=[Name of source Cloud SQL instance to target]
DES_INSTANCE=[Name of destination Cloud SQL instance to target]
BUCKET=gs://[Name of Bucket set aside for temporarily storing mysqldump]
DATABASE=[MySQL Database name]

# Export source SQL to SQL file
gcloud sql instances export $SRC_INSTANCE $BUCKET/sql/export.sql \
    --database $DATABASE --project $PROJECT

# Import SQL file to destination SQL
gcloud sql instances import $DES_INSTANCE $BUCKET/sql/export.sql \
    --project $PROJECT

# Delete SQL file and export logs. 
gsutil rm $BUCKET/sql/export.sql*

There you go — three commands. The only thing you need to do to make the new DB work is make sure all of the database accounts are set up correctly on the new server; otherwise application calls will bomb.
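gcloud may be able to help with the accounts too. Something like the following should be close, though the exact flags have shifted across gcloud releases, and ‘appuser’ is a hypothetical account name:

# Recreate an application account on the destination instance.
gcloud sql users create appuser --instance $DES_INSTANCE --host % --password [PASSWORD]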

Now keep in mind that your mileage (or kilometerage) may vary.  In this case, I am going between MySQL 5.5 and MySQL 5.6, and I had no issues. If there is a reason that your old DB won’t run in the new target, it will fail.  This script also assumes that you are in the same project with appropriate permissions to all.

There’s a lot more you can do with gcloud to manage your Cloud SQL installation. Make sure to check out the rest of the documentation.


Working with Cloud Vision API from JavaScript

I ran into a case where I wanted to fool around with Cloud Vision API from pure JavaScript. Not node.js, just JavaScript running in a browser. There were no samples, so I figured I’d whip up some. So here is a little primer on how to do this from JavaScript in a browser.

First you have to take care of a few prerequisites:

Once you do this you’re ready to start developing. Make sure you hold on to the API key you created above.

The first thing you need to do is create an upload form.  This is pretty basic in HTML5.

<!DOCTYPE html>
<html lang="en">
<head>
	<meta charset="UTF-8">
	<title>Cloud Vision Demo</title>
	<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.2/jquery.min.js"></script>
	<script src="key.js"></script>
	<script src="main.js"></script>
</head>
<body>
	<form id="fileform" action="">
		<select name="type" id="type">
			<option value="LANDMARK_DETECTION">LANDMARK_DETECTION</option>
		</select><br />
		<input id="fileInput" type="file" name="fileField"><br /><br />
		<input type="submit" name="submit" value="Submit">
	</form>

	
</body>
</html>

Note that I’m using a select box to drive the type of detection I am doing.  There are more choices, but I’m sticking with landmark detection for now.

Next you need to convert the image to Base64 encoding to transmit the image data via a REST API. I looked around for how to do this “properly” and the best I came up with was the “easy way” mentioned in this Stack Overflow post – Get Base64 encode file-data from Input Form.

I use readAsDataURL().

function uploadFiles(event) {
  event.stopPropagation(); // Stop stuff happening
  event.preventDefault(); // Totally stop stuff happening

  //Grab the file and asynchronously convert to base64.
  var file = $('#fileInput')[0].files[0];
  var reader = new FileReader();
  reader.onloadend = processFile;
  reader.readAsDataURL(file);
}

function processFile(event) {
  var encodedFile = event.target.result;
  sendFiletoCloudVision(encodedFile);
}

Then I massage the content into the JSON format that the Cloud Vision API expects. Note that I strip out “data:image/jpeg;base64,”. Otherwise Cloud Vision sends you errors. And you don’t want that. 

// Inside sendFiletoCloudVision(content) - content is the data URL from processFile().
var type = $("#type").val();

  // Strip out the file prefix when you convert to json.
  var json = '{' +
    ' "requests": [' +
    '	{ ' +
    '	  "image": {' +
    '	    "content":"' + content.replace("data:image/jpeg;base64,", "") + '"' +
    '	  },' +
    '	  "features": [' +
    '	      {' +
    '	      	"type": "' + type + '",' +
    '			"maxResults": 200' +
    '	      }' +
    '	  ]' +
    '	}' +
    ']' +
    '}';

And then I send. With the API key.  That’s it. Nothing to it really.

$.ajax({
    type: 'POST',
    url: "https://vision.googleapis.com/v1/images:annotate?key=" + api_key,
    dataType: 'json',
    data: json,
    //Include headers, otherwise you get an odd 400 error.
    headers: {
      "Content-Type": "application/json",
    },

    success: function(data, textStatus, jqXHR) {
      displayJSON(data);
    },
    error: function(jqXHR, textStatus, errorThrown) {
      console.log('ERRORS: ' + textStatus + ' ' + errorThrown);
    }
  });
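If you want to sanity-check the request outside the browser, the same call works from the command line with curl – this assumes you’ve saved the request JSON above as request.json and put your key in $API_KEY:

curl -s -H "Content-Type: application/json" \
    "https://vision.googleapis.com/v1/images:annotate?key=$API_KEY" \
    --data-binary @request.json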

If you want to dig deeper into the Cloud Vision API, check out the documentation.

The code for all of this is now shared in the Cloud Vision repo on GitHub.

Working with Cloud Vision API from PHP

I have been very excited by the Cloud Vision API recently put into Beta by Google Cloud Platform. I haven’t had a chance to play with it much, and I wanted to fool around with it from PHP on App Engine (or vanilla PHP for that matter), but there is no documentation for PHP yet.

So here is a little primer on how to do this from PHP on App Engine.

First you have to complete a few prerequisites:

Once you do this you’re ready to start developing. Because I am running PHP on App Engine I want the App Engine SDK for PHP.

I’m going to use the GUI to run this app, but you can use the command line just as easily.

The first thing I need to do is write a php.ini that properly allows use of cURL and has a good limit on uploaded files.

google_app_engine.enable_curl_lite = 1
upload_max_filesize = 5M

Then I set up a page named creds.php to hold my API key for Cloud Storage and my Cloud Storage Bucket name.

<?php 
// Create Bucket here:
// https://cloud.google.com/storage/docs/getting-started-console#create_a_bucket
$bucket = "YOUR BUCKET HERE";
// Get Service account API key here:
// https://cloud.google.com/vision/docs/getting-started#setting_up_a_service_account
$api_key = "YOUR API KEY HERE";

?>
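If you haven’t created the bucket yet, gsutil can do it from the command line (substitute your own bucket name):

gsutil mb gs://[YOUR BUCKET HERE]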

Then I create a form page named index.php that creates an App Engine Upload URL for me. (If I wanted to not use App Engine, I could just skip the call to Cloud Storage Tools and post directly to the next file in the example: process.php.)

<?php
include_once("creds.php"); // Get $bucket
use google\appengine\api\cloud_storage\CloudStorageTools;

$options = [ 'gs_bucket_name' => $bucket ];
$upload_url = CloudStorageTools::createUploadUrl('/process.php', $options);

?>

<!DOCTYPE html>
<html lang="en">
<head>
	<meta charset="UTF-8">
	<title>Cloud Vision API PHP Example</title>
</head>
<body>
	<form action="<?php echo $upload_url ?>" method="post" enctype="multipart/form-data">
	Your Photo: <input type="file" name="photo" size="25" />
	<input type="submit" name="submit" value="Submit" />
</form>
</body>
</html>

Then process.php does the hard work of taking the uploaded file, converting it to base64 and uploading to the Cloud Vision API.

<?php

include_once("creds.php"); // Get $api_key
$cvurl = "https://vision.googleapis.com/v1/images:annotate?key=" . $api_key;
$type = "LANDMARK_DETECTION";

//Did they upload a file...
if($_FILES['photo']['name'])
{
	//if no errors...
	if(!$_FILES['photo']['error'])
	{
		$valid_file = true;
		//can't be larger than ~4 MB
		if($_FILES['photo']['size'] > (4024000)) 
		{
			$valid_file = false;
			die('Your file\'s size is too large.');
		}

		//if the file has passed the test
		if($valid_file)
		{
			//convert it to base64
			$fname = $_FILES['photo']['tmp_name'];
			$data = file_get_contents($fname);
			$base64 = base64_encode($data);
			//Create this JSON
			$r_json ='{
			  	"requests": [
					{
					  "image": {
					    "content":"' . $base64. '"
					  },
					  "features": [
					      {
					      	"type": "' .$type. '",
							"maxResults": 200
					      }
					  ]
					}
				]
			}';

			$curl = curl_init();
			curl_setopt($curl, CURLOPT_URL, $cvurl);
			curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
			curl_setopt($curl, CURLOPT_HTTPHEADER,
				array("Content-type: application/json"));
			curl_setopt($curl, CURLOPT_POST, true);
			curl_setopt($curl, CURLOPT_POSTFIELDS, $r_json);
			$json_response = curl_exec($curl);
			$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
			curl_close($curl);

			if ( $status != 200 ) {
			    die("Error: $cvurl failed status $status" );
			}

			echo "<pre>";
			echo $json_response;
			echo "</pre>";
		}
	}
	//if there is an error...
	else
	{
		//set that to be the returned message
		die('Error: ' . $_FILES['photo']['error']);
	}
}
?>

Finally I have to create an app.yaml to serve up the two pages.

module: default
version: 1
api_version: 1
runtime: php55
threadsafe: yes

handlers:

# Needed for static image files

- url: /
  script: index.php

- url: /process.php
  script: process.php

Use GoogleAppEngineLauncher to start your app.

You should get this.

Assuming you upload a picture taken from the top of the Eiffel Tower looking at the Champ de Mars, you’ll get something like this:

{
  "responses": [
    {
      "landmarkAnnotations": [
        {
          "mid": "/m/02j81",
          "description": "Champ de Mars",
          "score": 0.81389683,
          "boundingPoly": {
            "vertices": [
              {
                "x": 202,
                "y": 410
              },
              {
                "x": 1967,
                "y": 410
              },
              {
                "x": 1967,
                "y": 1318
              },
              {
                "x": 202,
                "y": 1318
              }
            ]
          },
          "locations": [
            {
              "latLng": {
                "latitude": 48.858249,
                "longitude": 2.294694185256958
              }
            }
          ]
        },
        {
          "mid": "/m/02j81",
          "description": "Paris",
          "score": 0.5426321,
          "boundingPoly": {
            "vertices": [
              {
                "x": 305,
                "y": 412
              },
              {
                "x": 1737,
                "y": 412
              },
              {
                "x": 1737,
                "y": 895
              },
              {
                "x": 305,
                "y": 895
              }
            ]
          },
          "locations": [
            {
              "latLng": {
                "latitude": 48.858546,
                "longitude": 2.3222419999999997
              }
            }
          ]
        },
        {
          "mid": "/g/1tc__sx0",
          "description": "France Eiffel Hotel",
          "score": 0.36458692,
          "boundingPoly": {
            "vertices": [
              {
                "x": 732,
                "y": 394
              },
              {
                "x": 1260,
                "y": 394
              },
              {
                "x": 1260,
                "y": 691
              },
              {
                "x": 732,
                "y": 691
              }
            ]
          },
          "locations": [
            {
              "latLng": {
                "latitude": 48.858362,
                "longitude": 2.294125
              }
            }
          ]
        }
      ]
    }
  ]
}

There you go: a bare-bones but simple Cloud Vision example in PHP.

If you want to dig deeper into the Cloud Vision API, check out the documentation.

The code for all of this is available on GitHub.

Compute Engine and App Engine – a Comparison

I want to show you a little demo of how Compute Engine and App Engine work. Both techs have their strengths and weaknesses, and I wanted to make something to showcase them. 

Compute Engine allows you to spin up Virtual Machines (henceforth to be referred to as “VMs” due to the fact that I can’t be bothered to write “irtual” and “achine”.) VMs give you a lot of control over your system. You can run a number of OSes, with variable processor, memory, and disk configurations. You interact with it by configuring a VM through the Developer Console or on the command line. You then SSH into your VM.
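To make that concrete, here’s roughly what creating and connecting to a VM looks like on the command line – ‘my-vm’ and the zone are just hypothetical picks:

gcloud compute instances create my-vm --zone us-central1-a --machine-type n1-standard-1
gcloud compute ssh my-vm --zone us-central1-a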

App Engine, on the other hand, just takes code. You upload it and we run it. No SSH, no machine, just an upload site and a URL. App Engine by default gives you no control over the hardware running the code. The trade-off is that we can immediately scale from zero load to any load you muster.
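The App Engine workflow, by contrast, is more or less a single command from your app’s directory (the exact command depends on your SDK version):

gcloud app deploy app.yaml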

So how do these compare? App Engine scales in milliseconds? What does that look like? Compute Engine starts up in tens of seconds? What does that mean? This demo shows how fast you can spin up Compute Engine machines versus how fast you can spin up App Engine instances. This isn’t a one-is-better-than-the-other comparison; there are reasons to use both of these techs, and they aren’t mutually exclusive. Let me know what you think.


PHP on App Engine General Availability

Earlier this week, Google Cloud Platform announced General Availability of PHP on App Engine. Developers are now free to use App Engine to power their developer experience using…

Oh wait, you were already using PHP on App Engine. And have been doing so for a few months, or years.  What does this announcement mean for you?

The big bullet point here is that Google is taking the “Beta” label off of PHP on App Engine. It is now governed by the Service Level Agreement and Deprecation Policy.

Now I’m not a lawyer, so all the rest of this is subject to, you know, me not being a lawyer, and therefore any interpretation herein, yada yada. You know, check with your lawyery people before taking my word for it. I’m mostly going to just describe these things, and point you to the actual documents.

Service Level Agreement

The SLA sets expectations for how much uptime Google Cloud Platform delivers, and what happens if they let you down.  It puts forth a number of uptime stats they need to hit, and what Google Cloud Platform will do if they do not meet them. It also outlines what you need to do to get compensation.

Read the SLA for more information.

Deprecation Policy

The Deprecation Policy states how long Google Cloud Platform will try to keep running covered services after a deprecation announcement, unless there is a very serious reason not to.

Read the Deprecation Policy, contained in section 7, for more information.

Please, read these with your lawyerly people. Provide them with Scotch, the promise of billable hours, and whatever else you need to give your lawyerly people to make them happy (Orphan tears?  I kid, I kid.  Please don’t sue me.)

This is a signal that PHP is joining the list of technologies that you can feel secure to choose Google Cloud Platform to host.

Atlassian Connect add-on Starter Kit for App Engine

Atlassian Connect add-ons are extensions written by third-party developers to augment Atlassian’s hosted software. We’d love for the Atlassian community to host their add-ons with Google Cloud Platform. To help out, I’m releasing the Atlassian Connect Add On Starter Kit for App Engine on GitHub.

Google App Engine is a great environment to get started hosting this sort of solution.

  • You just upload code.
  • We handle scaling for you.
  • App Engine serves up web services with little to no config.
  • App Engine integrates well with other Google developer services.

On the Atlassian side, as long as you write your services to respond the way Connect expects,  you can write your add-ons in whatever language or technology stack you want.

I wrote the Atlassian Connect Add On Starter Kit for App Engine to give a working example for getting started on App Engine using each of the supported languages. Each language folder (Go, Java, PHP, Python) contains code to create two applications:

Hello World – basically the Hello World example written by Atlassian with added App Engine configs to show the delta between a vanilla add-on and one configured to run on App Engine.

The Hello World. Nothing special.

Language Check – this application scans a JIRA instance for issue titles in languages that differ from the default language, and then labels them with the language they are in. The idea is that you can then easily triage issues that should be routed to staff who can support that language. It takes advantage of the Google Translate API to accomplish this.

Add-on translating with the default language set to English.
Add-on translating with the default language set to Czech.

Now, all of this isn’t to say you cannot use other technologies in Google Cloud Platform to host your add-on.  You can also host on either Compute Engine or Container Engine. After you configure one of these solutions you can just follow Atlassian’s existing tutorials to create an add-on.

Any way you want to do it, Google Cloud Platform can help the Atlassian developer community host their applications. If you want to write some code and have server configuration and scaling handled by us, App Engine is a good choice. If you would rather have more control over or flexibility with the systems you run, either Compute Engine or Container Engine can be your answer.

Atlassian Connect Add On Starter Kit for App Engine

Google Cloud Platform at Google I/O

Great news everyone: we’re going on the road! Google Cloud Platform will be running a series of events this summer called Google Cloud Platform: Next to bring a world tour level of attention to everything we are doing in the Cloud.

This week is Google I/O. The Cloud Platform team will be at Google I/O in a few sessions and events, but you may find fewer of us on the floor and speaking at Moscone Center. However, my co-workers on the Developer Advocacy team will be overflowing to space at Galvanize SF, which is just a few blocks away. The rest of our extended team is hard at work preparing for Google Cloud Platform: Next. We can’t wait to see you all on tour.

In the meantime, you will be able to find members of the Cloud Platform team at these sessions at Google I/O:

  • 5/28, 1:00PM – 2:00PM, Room 1 (L2) – Google Cloud Messaging 3.0
  • 5/28, 1:00PM – 1:30PM, Alcove 3 (L2) – Containers to back your mobile app
  • 5/28, 1:30PM – 2:00PM, Develop Talk 3 (L2) – Mobile app quality leaps to the cloud
  • 5/28, 3:00PM – 3:30PM, Develop Talk 3 (L2) – Building a real-time app in 5 minutes with Firebase
  • 5/29, 9:00AM – 9:30AM, Earn & Engage Talk (L2) – Real-time analytics for mobile and IoT
  • 5/29, 11:30AM – 12:00PM, Alcove 3 (L2) – Containers to back your mobile app
  • 5/29, 2:00PM – 2:30PM, Earn & Engage Talk (L2) – Building a real-time app in 5 minutes with Firebase
  • 5/29, 2:30PM – 3:00PM, Design Talk (L2) – Real-time analytics for mobile and IoT
  • 5/29, 3:30PM – 4:00PM, Develop Talk 3 (L2) – Mobile app quality leaps to the cloud

As mentioned, we will also be holding an event at Galvanize, from 1-9PM on day 1 (Thursday) of Google I/O.  Members of the Cloud Platform Developer Advocate team will be there for the entire event, and speaking in the evening.

  • 1:00PM – 5:00PM – Tech Stop: consult with Google Cloud Platform Developer Advocates
  • 5:30PM – 6:00PM – Presentation: Choosing between App Engine, Compute and Managed VMs. Understand the differences between Google Cloud Platform’s computing options, to make the best choice for your app. Terry Ryan, Developer Advocate, Google
  • 6:00PM – 6:30PM – Presentation: Making the real world talk to us in real-time with mobile video. Carter Maslan, CEO, Camio
  • 6:30PM – 7:00PM – Q/A: Google for Entrepreneurs. Mary Grove, Director of Google for Entrepreneurs
  • 7:00PM – 9:00PM – Drinks, food, chat

Finally, we’ll be hanging out at the various satellite events and in hotel bars in the evening. Track us down on Twitter – you can find us using @googlecloud’s list on Twitter, and when you find one of us, bring your questions, share your ideas, and pester us for swag! Enjoy I/O and we can’t wait to see you at Google Cloud Platform: Next.

Managed VM Not Connecting to gcr.io on OS X

Ran into this issue today fooling around with Docker and Managed VMs on Google Cloud Platform. I was running on Mac OS X, which meant boot2docker was also in the mix. Figured this could help someone else because it baffled me for a bit.

I was trying to start up a standard Python runtime on Managed VMs. I was using the following app.yaml:

module: default
runtime: python27
vm: true

api_version: 1
threadsafe: yes

resources:
  cpu: .5
  memory_gb: 1.3

manual_scaling:
  instances: 1

handlers:
- url: .*
  script: main.app

I ran the gcloud command:

gcloud preview app run ./app.yaml

In the midst of the output, this error was buried:

ERROR    2015-05-22 17:21:59,043 containers.py:283] v1 ping attempt failed
with error: Get https://gcr.io/v1/_ping: dial tcp: i/o timeout. If this
private registry supports only HTTP or HTTPS with an unknown CA certificate,
please add `--insecure-registry gcr.io` to the daemon's arguments. In the
case of HTTPS, if you have access to the registry's CA certificate, no need
for the flag; simply place the CA certificate at
/etc/docker/certs.d/gcr.io/ca.crt

What was really frustrating is that this had worked yesterday, and I had changed nothing in between. I tried a whole bunch of things and fired off a whole lot of searches on Google. In my searches I found this thread on GitHub that suggests it’s an issue with some sort of caching in Docker. So I did the following:

  • Launched VirtualBox
  • Right-clicked boot2docker-vm
  • Chose Close -> Power Off
  • Launched boot2docker again via Applications -> boot2docker
  • Tried again

And now it works.
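If you’d rather skip the VirtualBox GUI, the boot2docker CLI can likely do the same power cycle – a sketch, and command names may vary with your boot2docker version:

boot2docker down
boot2docker up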

Hope this helps someone else.