Thoughts on the PSF, Introduction

by phildini on June 14, 2016


The Python Software Foundation (PSF) is the non-profit that owns python.org, helps run PyPI, and makes sure PyCon happens. This is the introduction to a series of posts that will discuss some challenges that face the PSF and community as a whole, as well as some suggested solutions.

The big idea underlying all the little ideas in the following posts is this: The Python community is a unique and incredible community, and it is a community that I want to see grow and improve.

The Python community is full of welcoming, caring people, and it has shown over and over that it is not content to rest on past good deeds, but is continually pushing to be more welcoming and more diverse. It was an incredibly powerful symbol to me that I spoke with multiple people at PyCon who don’t currently use Python for their jobs, but come to PyCon to be a part of the community. When I find people who want to get into programming, I point them at Python partially because I think the language is more beginner-friendly than most, but mostly because I know the community is there to support them.

The only qualification I claim for this series is caring deeply about this incredible community. If you want to learn more about my background, check out the about page. The ideas that I’m going to be presenting are a combination of my own thoughts and conversations I’ve had at various conferences, in IRC channels, and on mailing lists. I’m not claiming to be the most qualified to speak on these things.

I have no real desire to critique the past. My goal is to start a conversation about the PSF’s future, a future which hopefully sees the PSF taking an even bigger role in supporting the community. To that end, there are three things that I think we should be talking about, which I’ll discuss over the next three posts.

  • Strengthening the Python ecosystem
  • Encouraging new adoption of Python and new Python community members
  • Supporting the existing Python community

If you are inspired to start these conversations, comments will be open on these posts, although I will be moderating heavily against anything that devolves into attacks. Assume the PyCon Code of Conduct applies. I would be thrilled if these posts started discussion on the official PSF mailing lists, or in local user groups, or among your friends.

In the upcoming post, I’ll talk about challenges that face the Python ecosystem. I’ll talk about support and maintenance of the Python Package Index, why it should matter tremendously to the Python community, and what the community and the PSF could be doing to better support PyPI and package maintainers. Sign up for our mailing list to hear about the next post when it’s published.


My Open Source Workflow

by phildini on June 7, 2016


I think people have an impression that I make lots of contributions to Open Source (only recently true), and that therefore I am a master of navigating the steps contributing to Open Source requires (not at all true).

Contributing to Open Source can be hard. Yes, even if you’ve done it for a while. Yes, even if you have people willing to help and support you. If someone tries to tell you that contributing is easy, they’re forgetting the experience they’ve gained that now makes it easy for them.

After much trial and error, I have arrived at a workflow that works for me, which I’m documenting here in the hopes that it’s useful for others and in case I ever forget it.

Let’s say you want to contribute to BeeWare’s Batavia project, and you already have a change in mind. First you need to get a copy of the code.

 

Image of arrow pointing to the "fork" button on the Batavia repo.

 

I usually start by forking the repository (or “repo”) to my own account. “Forking” makes a new repo which is a copy of the original repo. Once you fork a repo, you won’t get any more changes from the original repo, unless you ask for them specifically (more on that later).

Now I have my own copy of the batavia repo (note the phildini/batavia instead of pybee/batavia).


Image of an arrow pointing at the batavia repo name on phildini's GitHub account.

 

To get the code onto my local machine so I can start working with it, I open a terminal and go to the directory where I want the code to live. As an example, I have a “Repos” directory where I’ve checked out all the repos I care about.

cd Repos
git clone [email protected]:phildini/batavia.git

This will clone the batavia repo into a folder named batavia in my Repos directory. How did I know what the URL to clone was? Unfortunately, GitHub just changed their layout, so it’s a bit more hidden than it used to be.

 

The GitHub clone popup

 

Now we have the code checked out to our local machine. To start work, I first make a branch to hold my changes, something like:

git checkout -b fix-class-types

I make some changes, then make a commit with my changes.

git commit -av

The -a flag will add all unstaged files to the commit, and the -v flag will show a diff in my editor, which will open to let me create the commit message. It’s a great way to review all your changes before you’ve committed them.

With a commit ready, I will first pull anything that has changed from the original repo into my fork, to make sure there are no merge conflicts.

But wait! When we forked the repo, we made a copy completely separate from the original, and cloned from that. How do we get changes from the official repo?

The answer is through setting up an additional remote server entry.

If I run:

git remote -v

I see:

origin	[email protected]:phildini/batavia.git (fetch)
origin	[email protected]:phildini/batavia.git (push)

Which is what I would expect -- I am pulling from my fork and pushing to my fork. But I can set up another remote that lets me get the upstream changes and pull them into my local repo.

git remote add upstream [email protected]:pybee/batavia.git

Now when I run:

git remote -v

I see:

origin	[email protected]:phildini/batavia.git (fetch)
origin	[email protected]:phildini/batavia.git (push)
upstream	[email protected]:pybee/batavia.git (fetch)
upstream	[email protected]:pybee/batavia.git (push)

So I can do the following:

git checkout master
git pull upstream master --rebase
git push origin master --force
git checkout fix-class-types
git rebase master

These commands will:

  1. Check out the master branch
  2. Pull changes from the original repository into my master branch
  3. Update the master branch of my fork of the repo on GitHub.
  4. Checkout the branch I’m working on
  5. Pull any new changes from master into the branch I’m working on, through rebasing.
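
Those five commands can be rehearsed end-to-end without touching GitHub at all. Below is a self-contained sketch (every path and repo name is invented for the demo) where a second local repository plays the original project and a bare local clone stands in for the fork on GitHub:

```shell
#!/bin/sh
# Simulate the fork-sync workflow with three local repositories:
# 'upstream' plays pybee/batavia, 'origin.git' plays the fork on
# GitHub, and 'work' plays the clone on my laptop.
set -e
tmp=$(mktemp -d)

# The original project, with one commit on master.
git init -q "$tmp/upstream"
cd "$tmp/upstream"
git symbolic-ref HEAD refs/heads/master   # name the branch master regardless of git defaults
git config user.email "[email protected]" && git config user.name "Demo"
echo "v1" > README
git add README
git commit -qm "initial commit"

# "Forking" on GitHub amounts to a server-side copy; a bare clone stands in for it.
git clone -q --bare "$tmp/upstream" "$tmp/origin.git"

# Clone the fork locally and wire up the extra 'upstream' remote.
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git config user.email "[email protected]" && git config user.name "Demo"
git remote add upstream "$tmp/upstream"

# Meanwhile the original project moves ahead by one commit.
cd "$tmp/upstream"
echo "v2" >> README
git commit -qam "upstream change"

# The sync dance from the post: pull upstream into master, push to the fork.
cd "$tmp/work"
git checkout -q master
git pull --rebase -q upstream master
git push -q --force origin master
git log --oneline -n 1   # the tip is now "upstream change"
```

After the final push, both the local master and the fork point at the same commit as the original project, which is exactly the state the five commands above leave you in.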

Now that I’m sure my local branch has the most recent changes from the original, I push the branch to my fork on github:

git push origin fix-class-types

With my branch all ready to go, I navigate to https://github.com/pybee/batavia, and GitHub helpfully prompts me to create a pull request. Which I do, remembering to create a helpful message and follow the contributing guidelines for the repo.

That’s the basic flow; now let’s answer some questions.

Why do you make a branch in your fork, rather than make the patch on your master branch?

  • GitHub pull requests are a little funny. From the moment you make a PR against a repo, any subsequent commits you make to that branch in your fork will get added to the PR. If I did my work on my master, submitted a PR, then started work on something else, any commits I pushed to my fork would end up in the PR. Creating a branch in my fork for every patch I’m working on keeps things clean.

Why did you force push to your master? Isn’t force pushing bad?

  • Force pushing can be very bad, but mainly because it messes up other collaborators’ histories, and can cause weird side effects, like losing commits. On my fork of a repo, there should be no collaborators but me, so I feel safe force pushing. You’ll often need to force push upstream changes to your fork, because the commit pointers will be out of sync.

What if you need to update your PR?

  • I follow a similar process, pulling changes from upstream to make sure I didn’t miss anything, and then pushing to the same branch again. GitHub takes care of the rest.

What about repos where you are a Core Contributor or have the commit bit?

  • Even when I’m a Core Contributor to a repo, I still keep my fork around and make changes through PRs, for a few reasons. One, it forces me to stay in touch with the contributor workflow, and feel the pain of any breaking changes. Two, another Core Contributor should still be reviewing my PRs, and those are a bit cleaner if they’re coming from my repo (as compared to a branch on the main repo). Three, it reduces my fear of having a finger slip and committing something to the original repo that I didn’t intend.

That’s a good overview of my workflow for Open Source projects. I’m happy to explain anything that seemed unclear in the comments, and I hope this gives you ideas on how to make your own contribution workflow easier!


Tips for Becoming a Core Contributor

by phildini on June 5, 2016


During the PyCon 2016 Sprints, I was made a Core Contributor to the BeeWare project, and was given the ‘commit bit’ on Batavia, an implementation of the Python virtual machine written in JavaScript. A friend of mine who works with the PDX PyLadies and regularly encourages people to contribute to Open Source saw this, and asked that I write a blog post on becoming a Core Contributor to Open Source projects.

It’s true that, for many projects, how you become a Core Contributor can seem mysterious. It often seems unclear what a Core Contributor even does, and it doesn’t help that each Open Source project has a slightly different definition of the responsibilities of a Core Contributor.

So this deliberately isn’t a “How to Become a Core Contributor” guide. It would be impossible to write such a guide and be definitive. This is me trying to reverse engineer how I became a Core Contributor on BeeWare and then extracting out things I think are good behaviors for getting to that stage.

How I Became a Core Contributor to BeeWare:

  1. Met Russell Keith-Magee at DjangoCon EU 2016, where he spoke about BeeWare and Batavia.

  2. Chatted with Russell about BeeWare, sprinted some on Batavia at DjangoCon EU 2016.

  3. Saw Russell and Katie McLaughlin at PyCon 2016, chatted more about BeeWare with both of them, joined the BeeWare sprint.

  4. Recognized that BeeWare had some needs I could fill, namely helping onboard new people and reviewing Pull Requests.

  5. Asked Russell for, and received, the ‘commit bit’ on the Batavia project so I could help review and merge PRs.

Tips I Can Give Based on My Experience:

  • Be excited about the project and the project’s future. I think the whole BeeWare suite has amazing potential for pushing Python to limits it hasn’t really reached before, and I want to see it succeed. A Core Contributor is a caretaker of a project’s future, and should be excited about what the future holds for the project.

  • Be active in the community. Go to conferences and meetups when you can, join the mailing lists and IRC channels, follow the project and the project maintainers on Twitter. I met Russell and Katie at a conference, then kept in touch via IRC and Twitter, then hung out with them again at another conference. Along the way, I was tracking BeeWare and helping where I could.

  • Be friendly with the existing project maintainers and Core Contributors. It’s less likely I would be a Core Contributor if I wasn’t friends with Russell and Katie, but the way we all became friends was by being active in the community around Python, Django, and BeeWare. One way to figure out if you want to be a Core Contributor on a project is to see which projects and project maintainers you gravitate towards at meetups and conferences. If there’s a personality match, you’re more likely to have a good time. If you find yourself getting frustrated with the existing Core Contributors, that’s probably a sign you’ll be more frustrated than happy as a Core Contributor to that project. It’s totally fine to walk away, or find other ways to contribute.

  • Focus on unblocking others. I still make individual code contributions to BeeWare projects, but I prioritize reviewing and merging pull requests, and helping out others in the community. From what I’ve seen, a Core Contributor’s time mainly goes to one of three things: triaging issues in the issue tracker, reviewing patches or pull requests, and helping others. It’s only when everyone else is unblocked that I start looking at my own code contributions.

  • Have fun. I asked to become a Core Contributor to BeeWare because I enjoy the community, enjoy Russell’s philosophy on bringing on newcomers, and think the project itself is really neat. If you’re having fun, it’s obvious, and most Core Contributors want to promote the people who are on fire for a project.

My hope is that I have made becoming a Core Contributor to an Open Source project seem achievable. It is completely achievable, no matter your current skill level. There’s a lot more detail I didn’t cover here, and I can’t promise that if you do all these things you’ll become a Core Contributor, even on the BeeWare project. When you ask to become a Core Contributor to a project, the existing project maintainers are evaluating all kinds of things, like how active you are, how well you might mesh with the existing team, and what existing contributions you’ve made to the project and the community. It might not be a great fit, but it doesn’t mean you’re not a great person.

What I can say is that being a Core Contributor is work, hard work, but incredibly rewarding. Seeing someone make their first contribution, and helping shepherd that contribution to acceptance, is more rewarding for me than making individual contributions. Seeing a project grow, seeing the community grow around a project, makes the work worth it.

If you have questions about my experience, or about contributing to Open Source in general, I'm happy to answer them in the comments, on Twitter @phildini, or by email at [email protected].


Self-Importance

by phildini on May 13, 2016


Originally, I was going to start this post with:

Humans have a tendency to over-attribute our own importance.

But then I realized by starting the post that way, I was being incredibly guilty of the very thing I was saying. I mean, read that sentence again. I was getting ready to start a blog post by pretending to speak for all of humanity. That's like god-level delusions of self-importance there. So. Let's try one more time.

I have a tendency to over-attribute my own importance. This manifests itself most often in thinking that the way people act around me has something to do with me. I'll meet a friend on the street, or have an interaction with someone at work, and if it doesn't go the way I'm planning, or they seem upset, the conclusion I'll immediately jump to is that I did something wrong, or that they don't like me. On the one hand, this seems like a form of social anxiety, that I'm trying to please all the people around me in an attempt to make and keep friends. And I'm not saying I don't have that going on, and it's a struggle that's being fought in my head a whole lot of the time, but let's take a step back. 

How egotistical do I have to be to start by thinking that I am the sole driver of how someone else behaves?

It is totally possible that I am doing something or saying something to cause these weird social interactions, but in order to be fair, to treat the other person as a f%&$ing human being with some measure of agency in their own lives, I need to allow that at least fifty percent of their reaction to any given situation comes from what's going on in their heads, and has nothing to do with me at all. I say "everyone is the hero in their own story" so often that it's almost a damn catchphrase, but when it comes to dealing with the people in my own life I rarely stop to think through what that means.

If I am doing my best to be a decent human being, and treating the people around me with respect, then whether or not any given interaction goes well is basically out of my control. I should, we all should, be trying to treat other people with a baseline of respect, and not attributing to malice that which can be described by ignorance (excepting blatant -isms. F--- you HB2!), but I should also remember the flip side: Sometimes people have bad days, or don't like me, and that's not always my fault or under my control.

To think otherwise is pure ego.

Looping back to how I was going to start this post, I think I'm not the only one who has trouble with this. A theme among people I talk to, especially people who live on the internet, is that they attribute good social interactions to the other person, and take all the blame for the bad interactions on themselves. That is self-loathing, and self-importance, and I hope I can remember to do better should we ever meet (again).

tl;dr: I should examine my words and actions to make sure they meet my own standards, and remember that people are entitled to their own lives and reactions.

 


Using Django Channels as an Email Sending Queue

by phildini on April 8, 2016


Channels is a project led by Andrew Godwin to bring native asynchronous processing to Django. Most of the tutorials for integrating Channels into a Django project focus on Channels' ability to let Django "speak WebSockets", but Channels has enormous potential as an async task runner. Channels could replace Celery or RQ for most projects, and do so in a way that feels more native.

To demonstrate this, let's use Channels to add non-blocking email sending to a Django project. We're going to add email invitations to a pre-existing project, and then send those invitations through Channels.

First, we'll need an invitation model. This isn't strictly necessary, as you could instead pass the right properties through Channels itself, but having an entry in the database provides a number of benefits, like using the Django admin to keep track of what invitations have been sent.

from django.db import models
from django.contrib.auth.models import User


class Invitation(models.Model):

    email = models.EmailField()
    sent = models.DateTimeField(null=True)
    sender = models.ForeignKey(User)
    key = models.CharField(max_length=32, unique=True)

    def __str__(self):
        return "{} invited {}".format(self.sender, self.email)

We create these invitations using a ModelForm.

from django import forms
from django.utils.crypto import get_random_string

from .models import Invitation


class InvitationForm(forms.ModelForm):

    class Meta:
        model = Invitation
        fields = ['email']

    def save(self, *args, **kwargs):
        self.instance.key = get_random_string(32).lower()
        return super(InvitationForm, self).save(*args, **kwargs)
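
As an aside, get_random_string comes from django.utils.crypto, and get_random_string(32).lower() yields a 32-character lowercase alphanumeric key. A rough standard-library sketch of the same idea (not the code the form actually runs) looks like this:

```python
import secrets
import string

# Hypothetical stand-in for django.utils.crypto.get_random_string(32).lower():
# 32 characters drawn from lowercase letters and digits.
ALPHABET = string.ascii_lowercase + string.digits

def make_invite_key(length=32):
    # One cryptographically random character at a time.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

key = make_invite_key()
assert len(key) == 32
assert key == key.lower()
```

The key ends up in the invite URL, so the important properties are that it's unguessable and URL-safe, which both versions satisfy.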

Connecting this form to a view is left as an exercise to the reader. What we'd like to have happen now is for the invitation to be sent in the background as soon as it's created. Which means we need to install Channels.

pip install channels

We're going to be using Redis as a message carrier, also called a layer in Channels-world, between our main web process and the Channels worker processes. So we also need the appropriate Redis library.

pip install asgi-redis

Redis is the preferred Channels layer and the one we're going to use for our setup. (The Channels team has also provided an in-memory layer and a database layer, but use of the database layer is strongly discouraged.) If we don't have Redis installed in our development environment, we'll need instructions for installing Redis on our development OS. (This possibly means googling "install redis {OUR OS NAME}".) If we're on a Debian/Linux-based system, this will be something like:

apt-get install redis-server

If we're on a Mac, we're going to use Homebrew, then install Redis through Homebrew:

brew install redis

The rest of this tutorial is going to assume we have Redis installed and running in our development environment.

With Channels, Redis, and asgi-redis installed, we can start adding Channels to our project. In our project's settings.py, add 'channels' to INSTALLED_APPS and add the channels configuration block.

import os  # usually already imported at the top of settings.py

INSTALLED_APPS = (
    ...,
    'channels',
)

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
        },
        "ROUTING": "myproject.routing.channel_routing",
    },
}

Let's look at the CHANNEL_LAYERS block. If it looks like Django's database settings, that's not an accident. Just as we have a default database defined elsewhere in our settings, here we're defining a default Channels configuration. Our configuration uses the Redis backend, specifies the URL of the Redis server, and points at a routing configuration. The routing configuration works like our project's urls.py. (We're also assuming our project is called 'myproject'; you should replace that with your project's actual package name.)

Since we're just using Channels to send email in the background, our routing.py is going to be pretty short.

from channels.routing import route

from .consumers import send_invite

channel_routing = [
    route('send-invite', send_invite),
]

Hopefully this structure looks somewhat like how we define URLs. What we're saying here is that we have one route, 'send-invite', and what we receive on that channel should be consumed by the 'send_invite' consumer in our invitations app. The consumers.py file in our invitations app is similar to a views.py in a standard Django app, and it's where we're going to handle the actual email sending.

import logging

from django.contrib.sites.models import Site
from django.core.mail import EmailMessage
from django.utils import timezone

from invitations.models import Invitation

logger = logging.getLogger('email')


def send_invite(message):
    try:
        invite = Invitation.objects.get(
            id=message.content.get('id'),
        )
    except Invitation.DoesNotExist:
        logger.error("Invitation to send not found")
        return

    subject = "You've been invited!"
    body = "Go to https://%s/invites/accept/%s/ to join!" % (
        Site.objects.get_current().domain,
        invite.key,
    )
    try:
        # Named 'email' so we don't shadow the Channels message above.
        email = EmailMessage(
            subject=subject,
            body=body,
            from_email="Invites <invites@%s>" % Site.objects.get_current().domain,
            to=[invite.email],
        )
        email.send()
        invite.sent = timezone.now()
        invite.save()
    except Exception:
        logger.exception("Problem sending invite %s", invite.id)

Consumers consume messages from a given channel, and messages are wrapper objects around blocks of data. That data must reduce down to a JSON blob, so it can be stored in a Channels layer and passed around. In our case, the only data we're using is the ID of the invite to send. We fetch the invite object from the database, build an email message based on that invite object, then try to send the email. If it's successful, we set a 'sent' timestamp on the invite object. If it fails, we log an error.
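
To make that constraint concrete, here's a tiny standard-library sketch (using the json module directly, not Channels itself) of why the payload carries the invite's primary key rather than the model instance:

```python
import json

# The message sent on 'send-invite' must survive a round trip through
# JSON, which integers and strings do; a Django model instance would not.
notification = {"id": 42}
wire_format = json.dumps(notification)   # what the channel layer can store
restored = json.loads(wire_format)
assert restored == {"id": 42}
```

The consumer then re-fetches the full Invitation from the database by that id, which also guarantees it sees the freshest data at send time.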

The last piece to set in motion is sending a message to the 'send-invite' channel at the right time. To do this, we modify our InvitationForm:

from django import forms
from django.utils.crypto import get_random_string

from channels import Channel

from .models import Invitation


class InvitationForm(forms.ModelForm):

    class Meta:
        model = Invitation
        fields = ['email']

    def save(self, *args, **kwargs):
        self.instance.key = get_random_string(32).lower()
        response = super(InvitationForm, self).save(*args, **kwargs)
        notification = {
            'id': self.instance.id,
        }
        Channel('send-invite').send(notification)
        return response

We import Channel from the channels package, and send a data blob on the 'send-invite' channel when our invite is saved.

Now we're ready to test! Assuming we've wired the form up to a view, and set the correct email host settings in our settings.py, we can test sending an invite in the background of our app using Channels. The amazing thing about Channels in development is that we start our devserver normally, and, in my experience at least, It Just Works.

python manage.py runserver

Congratulations! We've added background tasks to our Django application, using Channels!
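
As an aside, the "correct email host settings" mentioned above look roughly like the following in settings.py. Every host name and environment variable here is a placeholder; substitute whatever your email provider gives you:

```python
import os

# Hypothetical SMTP settings for settings.py; the defaults below are
# placeholders, not real credentials or a real mail server.
EMAIL_HOST = os.environ.get("EMAIL_HOST", "smtp.example.com")
EMAIL_PORT = int(os.environ.get("EMAIL_PORT", "587"))
EMAIL_HOST_USER = os.environ.get("EMAIL_HOST_USER", "")
EMAIL_HOST_PASSWORD = os.environ.get("EMAIL_HOST_PASSWORD", "")
EMAIL_USE_TLS = True
```

Reading the values from the environment keeps credentials out of version control, which also matches how we configured REDIS_URL earlier.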

Now, I don't believe something is done until it's shipped, so let's talk a bit about deployment. The Channels docs make a great start at covering this, but I use Heroku, so I'm adapting the excellent tutorial written by Jacob Kaplan-Moss for this project.

We start by creating an asgi.py, which lives in the same directory as the wsgi.py Django created for us.

import os
import channels.asgi

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
channel_layer = channels.asgi.get_channel_layer()

(Again, remembering to replace "myproject" with the actual name of our package directory)

Then, we update our Procfile to include the main Channels process, running under Daphne, and a worker process.

web: daphne myproject.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker --settings=myproject.settings -v2

We can use Heroku's free Redis hosting to get started, deploy our application, and enjoy sending email in the background without blocking our main app serving requests.

Hopefully this tutorial has inspired you to explore Channels' background-task functionality, and think about getting your apps ready for when Channels lands in Django core. I think we're heading towards a future where Django can do even more out-of-the-box, and I'm excited to see what we build!


Special thanks to Jacob Kaplan-Moss, Chris Clark, and Erich Blume for providing feedback and editing on this post.


6 Steps to Make Beginner Workshops More Beginner Friendly

by oboechick on March 30, 2016


I have spent roughly the last ten years of my life studying how to teach mathematics and music. I am not by any means an expert. I am simply a person with an educated opinion on how to teach beginners. I find that code is very similar to mathematics and music, so I am going to put this into terms that I understand and hope they help you.

I want you to imagine you are teaching an oboe player how to play Clair de Lune by Claude Debussy on the piano.

  1. You would not say, “Here is the music.”

  2. “Here is a chart to show you what the notes are on the piano.”

  3. “You know how to read music. Ready, set, go!”

This makes no sense. Here is a plan that would work a lot better.

  1. Make sure that the person can read the music. An oboe player regularly reads only one of the two staves a pianist does, and so may only be able to read one of them.

  2. Teach them the fundamentals of how to play the piano. A piano and an oboe are completely different. One major difference is that on the oboe, dynamics, or volume, are controlled by air. On the piano they are controlled by how hard you press the key. It doesn’t matter how hard you press the keys of an oboe; it will not get louder.

  3. Break the music down several times to make sure that the music is possible.

    1. Each hand on the piano is playing something different. So we start by breaking the music up into a few bars at a time and only work with one hand at a time.

    2. After the student knows what each hand is doing separately, you take it a few bars at a time putting the two hands together adding more in each lesson until you reach the end of the piece.

  4. Play the whole piece and enjoy!

    1. Note that this could take a week or a year. The secret is to know that it is ok for it to take as long as you need. It doesn't mean that you are stupid or that you will never be able to do it; it just means that you need more time than others.

When I walk into a workshop at a tech conference, the first words I usually hear are, “Here is a set of directions; follow them and you can make (fill in the blank)!” It is nice: each person is able to follow the instructions at their own pace. There can be two people or fifty people in the room, and the workshop leader can walk around and answer questions without feeling too overwhelmed by the number of people present. This is an optimal way to teach something given limited time, unknown space, and differing skill levels among the attendees.

There is a lot that is being done very well. The people are very friendly, and higher-level attendees help those at lower levels. However, for a person who is looking at coding for close to the first time, or who is not familiar with the programs being used, it can be overwhelming to have a lot of information thrown at them all at once. I am going to use the experience I had in my first workshop because it has stuck with me the most. I will not name the person who ran the workshop or the conference it was held at, because I do not want there to be backlash against this person or the conference organizers. I think the person who ran this workshop is very smart; it was simply their first time running a workshop, and the con is one of my favorites.

That said, there are a few small but very important things that those who run these workshops can do to make beginners feel that what they are working on is doable.

  1. There is no question that is stupid or boring.

    1. When you teach something, questions are the metric to show you how well you, as a teacher, are doing. You want to make the environment you are in safe. If you make someone believe that their question is stupid they are less likely to ask more questions which could result in them quitting.

  2. Don’t get mad at a beginner because they did not “google” it before they asked you.

    1. Having to figure out the right question to find the correct context, which site will have the most correct answer, and having to worry about whether or not there is a reliable internet connection to even do a web search can make a beginner who is on the verge of quitting quit.

    2. I have a background in Mathematics education. For a short time I tried to go into theoretical math. This did not work out, but it has influenced the way that I look for information. Some of the subjects I was trying to research did not exist on the internet. That meant I had to go to the school library and go through a couple hundred books to find one reference. If you were lucky you could find a person who had the information and could help you find the right resources. This means that I did not think to do an internet search to look for something; I ask a person, because in my experience that is the fastest way to gather information. I did not appreciate being scolded for not doing something that was not usual for me. This was the reason I stopped learning to code the first time. It was almost three years before I started again.

  3. Don’t use words like “easy”, “simple”, and “fast”.

    1. The first workshop I ever attended was a Twitter bot tutorial. I do not remember which program was used for the workshop; I was going to be able to build a Twitter bot, and nothing else really mattered at the time. I was excited because my partner had been making a lot of Twitter bots and now I would be able to make one too. The workshop was two hours long, and the phrases I heard used to describe it were along the lines of “this is so easy you will be done in no time”. I was given a booklet of instructions (which were online) with fifteen to twenty steps on each page, and there were somewhere between ten and fourteen pages. I did not understand half of what was on the first page. I was so overwhelmed that, not even ten minutes into the workshop, I left the room to go hide in the bathroom and cry. Do not imply that something is “easy” or “simple”. Not everyone will find it easy, fast, or simple. When a person sees these words they often feel that if it isn’t easy for them, they must be stupid. This can make the difference between whether a beginner quits or keeps going.

  4. Don’t assume they will know all of the terms that seem second nature to you.

    1. The hardest part about learning mathematics is the vocabulary. Depending on where you go there could be as many as ten different terms that mean the same thing. There are also a few terms that mean different things depending on the context. When planning a workshop for beginners, pretend that the person has no coding experience. This means they will not know any of the correct vocabulary.

      1. The best example I can think of from my first workshop experience is the word “fork”. In the instructions I was told to fork something on GitHub. First, I had no idea what GitHub was, and second, what in the world does “fork” mean!?! I knew it was an eating utensil, but what did that have to do with coding? (There were a lot of curse words swirling around in my head at this point.)

      2. Another example: I once had to explain what “click” means to someone who did not speak English as a first language. Do not assume anything; even the most basic terminology may be unfamiliar.

    2. Here are a few suggestions for how you can fix this without making everyone sit through you explaining every term aloud.

      1. Place a footnote on the page with the term explaining what it is. This small step will take time but it will help the beginner feel less overwhelmed by the amount of information you are asking them to process.

      2. Have a glossary with any term that you think may not be known by a new person. (This is the option I believe is best.) If no one needs it, then they don’t even have to look at it. This would be like the glossary in the back of a history or math book, with all the vocabulary words and definitions in it. You can create the glossary once and use it for any workshop you do. If someone asks about a term you don’t have in your glossary, thank them, write it down, and add it in for next time. Having this glossary will make the workshop more accessible all around.

    3. Does the definition of the word you are using match what your attendees know? Here is an example of why having some form of glossary or footnote for the vocabulary is a really good idea. Take the word “set”.

      1. There are eight definitions in the Merriam-Webster dictionary for the English language, and none of them have anything to do with math or computers.

      2. In math there are different kinds of sets; subsets and power sets are a few examples. And the rules defining whether or not a group is a set change depending on the context.

      3. In computer science a set is a group of data. Some of these definitions may overlap with those in math, but I am too unfamiliar with the subject to say definitively one way or the other. I have been told that there are at least five definitions in computer science.

      4. Are you confused yet? I certainly was when I was first learning set theory. This is just one word that I chose; imagine what it is like looking at a whole booklet of instructions and not understanding half of what is there. Believe me when I say there have been more than a few tears shed over similar difficulties, and I’m pretty sure they were not shed only by me.

    4. People will not feel stupid because you put more information than was needed into the instructions, but they will feel stupid if they have no idea what a word means and have to ask again and again. Especially if you roll your eyes and mutter under your breath that they should know it already because it is a beginner thing. This is supposed to be a workshop for beginners. They will not know everything you know. I know that I felt like I was taking a big risk by going to that workshop with my limited skills, and I am guessing that many others feel the same. It works best if you simply pretend none of your attendees know what you are talking about and go from there.

  5. Break it down more than you think is needed.

    1. The best way to prevent the feeling of being overwhelmed is to break the steps down more.

      1. As mentioned above, at the first workshop I attended I was given a booklet of instructions with fifteen to twenty steps on each page and somewhere between ten and fourteen pages. This is a lot to process all at once, even if I had known all of the correct terminology. The workshop booklet felt like a calculus book, and I was a student who had just started algebra 1. I felt overwhelmed, and all I had done was look at the instructions.

      2. Don’t put more than three to five directions on a page. Laying the instructions out like a powerpoint will help you know the right amount of information to put on each page. The less a beginner has to look at at a time, the more doable it will seem.

        1. If something is not blatantly obvious, add pictures with arrows. A picture of what something should look like will be a point that the student can check to make sure they are on the right track.

          1. In the instructions for the workshop, I was asked to create a new file in the program. That was all I was given; there were no instructions for how to do it. It turned out that I had to back out of where I was, find a menu (there were five), and then click the new file option. Beginners are not likely to know where to look in an unfamiliar program. Especially if they are using the program for the first time, beginners should not be held to a higher standard than you hold yourself.

        2. Breaking it down further will help you see steps you skipped. This is something I have nightmares about when I have to plan a lesson, whether for student teaching or because I have been asked to teach something. Looking at three to five steps at a time and asking yourself if anything needs to be done between each step will help eliminate the chance of missed steps.

  6. Always have a backup plan.

    1. Know that no matter what you do, chances are you are going to have to change something (or everything). Every group of people you work with is going to have different experiences and knowledge to pull from. This may mean that everyone who shows up is advanced, everyone is a beginner, or you have a group that is all over the board. Be ready to make changes once you see who is there; remember, it is always easier to make an easy lesson harder, but it is almost impossible to simplify a lesson on the fly.

    2. If you plan to use the internet, MAKE SURE THERE IS A RELIABLE INTERNET CONNECTION! If you are unsure, download a copy of your materials onto a thumb drive and have a plan that allows the attendees to participate in the workshop whether or not internet is available.

      1. If there is a program the attendees need to have downloaded beforehand, have a thumb drive with copies of the program on it so that attendees don’t spend two hours downloading the program(s). (This is assuming there is internet to download it.)

      2. You could also try to get the conference to put a note in the description of your workshop asking attendees to download what you are using before they come, if that is possible. Not everyone will be able to, but it may mean fewer people are frustrated by trying to download something over a questionable internet connection, or by there not being enough thumb drives for everyone.

Just because a coder feels that the level of responsibility they have been given is more than they are capable of does not necessarily mean that they are, in fact, a true beginner. Do these people have more that they need to learn? Yes; the moment you stop learning is the moment your career dies. However, these coders also know the basics of coding and are therefore not true beginners. We need to stop letting people who are beginning-intermediate, intermediate, advanced-intermediate, and advanced level coders set the bar for what a beginner is, because chances are they have a hard time remembering what it was like to be a true beginner.

These six steps should help make true beginners feel more welcome at workshops while still allowing higher level coders to participate. Let’s all remember that the goal is to encourage as many people as possible to code.


The Engineer's Day

by oboechick on March 23, 2016


Waiting waiting for this to run,

Ooo! I've found a site that looks fun!

Maybe if I'm really good,

I'll finish this before it's done!


Follow the Yellowbrick Pavement

by phildini on March 15, 2016


Over the past couple months, it has been at times painful to watch civic discourse in Alameda. I believe the City Council has arrived at a good starting point in most of its decisions, but the path taken has often been confusing to follow.

Last night's Planning Board meeting was a gust of fresh air in comparison. Two major issues were discussed by the Board: street names for the 2100 Clement Street project, and Design Review Approval for block 11, block 8, and phase 1 of the waterfront park at Alameda Point Site A.

  • The Planning Board wants to put more consideration behind the street names for the 2100 Clement project. There's a worry about the current suggestions being pronounceable to the average Alamedan, as well as a worry about the appropriateness of some of the alternates.
  • The review for the design of block 11, block 8, and the waterfront park at Alameda Point Site A revolved around:
    • Making sure windows are up to city code
    • Close inspection of the proposed exterior construction materials
    • The color of the street in the shared plaza. Earlier sketches showed a yellowish color, last night's designs had the road returning to a more street-ish gray. Based on the comments of President Knox White and others, there's going to be more thought put into this area, as many on the Board feel the street color helps dictate how the space would be used, and the apparent preference is for it to be pedestrian-focused.
    • Discussion on the name of the street currently called West Atlantic, which would potentially be an extension of Ralph Appezzato Memorial Parkway. There's a concern that Alamedans already shorten that street name to RAMP, and so the board should consider naming the street Appezzato Parkway or Appezzato Boulevard.

(To that last point: Ralph Appezzato was the first Alameda Mayor I knew personally, and during these months of turmoil in Alameda civic discourse I find myself missing his presence strongly. I am immensely glad the Planning Board is doing what it can to keep his name in the memory of Alamedans.)

The Planning Board voted to approve the proposed design for the pieces of Alameda Point Site A mentioned above, and I believe the vote was unanimous. In both discussions, President Knox White and the other board members asked informed questions, voiced solid points, and arrived at conclusions that balanced a push for future improvements to Alameda with a sense of keeping Alameda's history and character. 

It may seem like I am being extraordinarily complimentary to the Planning Board, or that I'm being far too friendly with them. To that I would say: The Planning Board accomplished the business they came there to do, including time for public comments, and did so in less than 90 minutes.

It will be interesting to see how tonight's City Council meeting compares.


A Rising Tide Lifts All Transit

by phildini on March 11, 2016


One of the most common concerns I hear about Alameda growing as a city is the congestion at our bridges and tunnels, and the difficulty people have getting on and off the island. To me, it feels like we need a comprehensive plan that increases public transit access and thinks about access to and from Alameda in terms of the whole island and the whole region. Public transit is critical if Alamedans want to maintain the quality of life the island has to offer.

The speakers at Wednesday night's City of Alameda Democratic Club meeting agree that public transit is essential. Speakers from all the transit agencies that serve Alameda spoke in turn about what they're doing already to serve the area, and how they would like to improve.

  • BART has about 430,000 riders every weekday, riding on infrastructure that was built in the 70s, in train cars that are about as old. They want to spend $9.6 billion on improvements, mostly on purchases of new train cars and infrastructure upgrades.
  • BART has about half the money they're looking for, and will most likely be putting a parcel tax or a bond measure on the ballot in November for the other half.
  • AC Transit has about 179,000 riders every weekday, mostly people going to work and schoolchildren. Schoolchildren alone make up 30,000 of their riders. AC Transit's main goal is to increase service by working closely with the City of Alameda; they're expanding lines that run through the city and collaborating on a city transit plan.
  • AC Transit hopes to improve its service and its fleet with the funds it already has, although they are also investigating a parcel tax.
  • The Water Emergency Transportation Authority (WETA, the agency in charge of the San Francisco Bay Ferries) sees Alameda as the city it serves most, and wants to deepen its commitment to Alameda by building a maintenance facility and another terminal on the southwest side of Alameda.
  • WETA knows that transit to the terminals, as well as parking at the terminals, is the greatest challenge their ridership faces; they're hoping for stronger collaboration with the other agencies and the city to make it easier to get to the existing Alameda ferry terminals.
  • The West Alameda Transportation Demand Management Association (TMA) is running a series of apparently ridiculously successful shuttles from the West End of Alameda to 12th St. BART in Oakland, and wants to see their service expand as well. 
  • TMA's main focus right now seems to be on education, getting Alamedans, Alameda businesses, and the employees of Alameda businesses thinking about public transit options and how we can all better utilize public transit.

It was an information-dense first half of a meeting, to say the least. The major takeaways for me were:

  • Public transportation is on the up in Alameda, and many want to see it increase.
  • The transit agencies see themselves in cooperation, not competition. They understand their inter-connectedness to each other, and seem to want each other to thrive.
  • They're all trying to buy American and bring jobs to Alameda.

As I am unabashedly in favor of more public transit, I'm thrilled to hear about the programs currently in place, and that those programs are trying to expand. I want Alameda to be more walkable and bikeable, and I want public transit to be a deeply viable alternative to owning a car in Alameda.

I said above that public transit is critical to the quality of life for current Alamedans, and it will be just as critical for future Alamedans. The second half of the meeting was dedicated to presentations from property developers, specifically the organizations behind the Del Monte project, the Encinal Terminals project, and the Alameda Point Site A project.

I'm not going to go too much into the projects here, mostly because I don't have hard numbers like I have with the transit agencies, and partially because growth in Alameda, and in the Bay Area, is a thorny subject. I think more growth is good for Alameda, and I think these projects have a shot at being a massive net positive to the city. Others feel differently.

What I can say is that both projects feel public transit is a critical need for their developments to succeed, and both are putting plans in place to improve transit in their development. The group in charge of Del Monte/Encinal Terminals seems a bit more on the ball in this regard, as they talked about having an organization like the TMA (potentially joined with the TMA) to continually improve transit in that part of the city, but both groups stressed how transit would be integral to what they're building.

I've talked so far about increasing public transit because it will ease congestion, and make Alameda an even better place to live. But there's another benefit of public transit that the Alameda Point Site A group drove home for me: Getting cars off the road.

The plan for Alameda Point Site A includes raising the level of some streets and buildings, and a terraced waterfront park area. Why? Because global warming has become enough of a reality that property developers are working "sea level rise strategies" into their plans. They're so certain the seas will rise from global warming that they're betting money on it. 

Every train car BART adds, every bus or shuttle added by AC Transit or the TMA, and every ferry added by WETA means fewer cars on the road and less CO2 in our atmosphere. In a world where major corporations are now banking on global warming happening, increased public transit in Alameda can't come soon enough.