Github link: shared.python.serialize

This is a thin wrapper around json. It serializes your data objects to disk and is able to load them from disk as well. It's pretty handy.

import shared.python.serialize as pyjson
pyjson.save(r"C:\path\file.json", {"data": 0})  # save to disk
from_disk = pyjson.load(r"C:\path\file.json")   # load from disk
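Under the hood, a wrapper like this can be just a few lines around the json module. Here is a minimal sketch (the function bodies are my assumption of how it works, not the module's actual code):

```python
import json

def save(path, data):
    # write any json-serializable object to disk
    with open(path, "w") as f:
        json.dump(data, f)

def load(path):
    # read the object back from disk
    with open(path) as f:
        return json.load(f)
```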

shared.python.file 1.0

Github link: shared.python.file

I often get mixed up about which file and folder operations live in os.path and which in shutil. I also feel that the os module is not very readable. So, for the sake of readability, I like to add a few functions that help out with file and folder operations.


One example of how unreadable os.path can be is getting a file name.

file_name = os.path.basename(r"C:\path\fileName.ext")

Pretty horrible, no? I mean, the function is called basename, as if "base" obviously meant the name of a file!

With name(), we can get the name of a file in a much more readable way. And, since we bothered with this, we can get the name of the file without the extension like so:

import shared.python.file as pyfile
file_name = pyfile.name(r"C:\path\fileName.ext", include_ext=False)
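For reference, a function like name() can be written in a couple of lines on top of os.path. This is a sketch of how it could work, not necessarily the version in the repo:

```python
import os

def name(path, include_ext=True):
    # the readable alternative to os.path.basename()
    file_name = os.path.basename(path)
    if not include_ext:
        # strip the extension, keeping only the name itself
        file_name = os.path.splitext(file_name)[0]
    return file_name
```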


The list_files() function returns all the files within a folder. You can obviously do this inline, but it's really nice to have it as a one-line call.

To make this function, I started with the line that gets me the list of files:

files = list()
for (dirpath, dirnames, filenames) in os.walk(path):
    files.extend([os.path.join(dirpath, x) for x in filenames])

That gives me a list of all the files in all the folders in the given path. That’s useful unless you only want the files in the path, and not in all the sub-folders. So we can add an if for that case:

files = list()
for (dirpath, dirnames, filenames) in os.walk(path):
    files.extend([os.path.join(dirpath, x) for x in filenames])
    if not recursive:
        break

At this point we have the correct list of files. I learned this trick from a friend at work; he uses the break statement in interesting ways. In this case, it breaks out of the os.walk() loop right after the first iteration. This means it skips all the sub-folders, leaving the list with only the files in the base folder.

Lastly, we filter the files down to the given extensions. I like to feed the extensions in as a list; that way, you can assemble your extensions in a loop. So assume the extensions are ["ma", "mb"]:

files = list()
for (dirpath, dirnames, filenames) in os.walk(path):
    files.extend([os.path.join(dirpath, x) for x in filenames])
    if not recursive:
        break

if extension:
    filtered = list()
    extensions = pyutils.make_list(extension)
    for i, e in enumerate(extensions):
        extensions[i] = "." + e.replace(".", "")
    for file_name in files:
        add = False
        for e in extensions:
            if file_name.endswith(e):
                add = True
        if add:
            filtered.append(file_name)
    files = filtered

The first loop makes sure that all extensions start with a “.”.

The second loop adds the files to filtered only if they have the right extension.
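Assembled into one function, the pieces above might look like this. I have inlined the list handling so the sketch stands alone (the version in git uses pyutils.make_list() instead):

```python
import os

def list_files(path, extension=None, recursive=False):
    # gather files, optionally recursing into sub-folders
    files = list()
    for (dirpath, dirnames, filenames) in os.walk(path):
        files.extend([os.path.join(dirpath, x) for x in filenames])
        if not recursive:
            break  # stop after the base folder

    # keep only the files that match the given extensions
    if extension:
        extensions = extension if isinstance(extension, list) else [extension]
        extensions = ["." + e.replace(".", "") for e in extensions]
        files = [f for f in files if any(f.endswith(e) for e in extensions)]
    return files
```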

And this is how the function is used:

import shared.python.file as pyfile
maya_files = pyfile.list_files(r"C:\path", extension=["ma","mb"], recursive=True)

For the complete code, please refer to the function in git (shared.python.file)

If you have any doubts or questions about any of the functions or the module itself, please add it to the comments. I’ll be happy to add any clarification to this post or to change the module itself (where it makes sense).


This module is quite short, but useful when dealing with multiple users. It’s used to identify the user, IP, and computer. I also have a wrapper function for subprocess, but I’m not fully convinced that it belongs in there.

Most of the discussion around this module will be around readability. Fortunately, I have a post on Python Readability already written. So I’m going to continue under the assumption that you agree with me on the importance of readability.


Let me know how you feel about this function. I added it because a friend of mine uses it this way; we were working on a core repo together at one point, and I don't have a real reason to remove it. So I've now been adding it to every core library I write.

The reality is that Python does not really give you truly parallel execution within a single process (the GIL gets in the way). The practical way around it is to launch separate Python processes. This is why I have kept the wrapper here. But I feel that if it's worth making this function, it's worth making a full wrapper for subprocess. I have not done that. Maybe I should. Let me know in the comments if you think it's worthwhile, and I will put it on the list of modules we will add to our core libraries.
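A wrapper along these lines can be quite small. This sketch shows the idea of launching a separate Python process without blocking; the function name and signature here are my invention, not the module's actual API:

```python
import subprocess
import sys

def launch_python(script_path, *args):
    # start a separate Python process and return immediately,
    # without waiting for it to finish
    command = [sys.executable, script_path] + list(args)
    return subprocess.Popen(command)
```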

Other than that, I think it’s all pretty self explanatory. Let me know in the comments if you need any clarifications, or if I’m missing something, or if I’m just nuts.

Ok, see you next week.

// Isoparm

shared.python.utils 1.0

Github link: shared.python.utils

Welcome to the second post on python utils. Let's get to it, shall we?


Making an empty list with a specific number of items looks like this:

# generates a list full of None values:
empty_list = [None] * number_of_items

While it's not terrible, it's not great either. I made this function so I could fill the list with something specific, like so:

empty_list = pyutils.empty_list(number_of_items=0, default_item=True)
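The implementation can be a one-liner. This is a sketch of how I'd expect it to work; the actual signature in the module may differ:

```python
def empty_list(number_of_items, default_item=None):
    # a pre-sized list filled with the given default item
    return [default_item] * number_of_items
```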

time_it() and count_it()

These two are useful decorators. What is a decorator, you ask? Think of it as a generic wrapper you write once, leaving the wrapped function as a variable. If you want a step-by-step on how to make decorators, please visit my decorator discussion page.
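To give a feel for what these two do, here is a sketch of how such decorators are commonly written. These are my illustrative versions, not necessarily the ones in the module:

```python
import functools
import time

def time_it(func):
    # decorator: print how long the wrapped function took
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print("{} took {:.4f}s".format(func.__name__, time.time() - start))
        return result
    return wrapper

def count_it(func):
    # decorator: count how many times the wrapped function is called
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper
```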


I suppose this is where we get specific to problems I’ve had. I needed to sort the unique items in a list by how often they appear. For example (the input list here is illustrative):

items = ["Mouth", "Up", "Mouth", "Corner", "Up", "Mouth"]
data = get_sorted_by_most_common(items)

# result:
# ["Mouth", "Up", "Corner"]

I’m aware this is not a common thing, but hey, if you ever have a similar issue, here it is.
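One way such a function could work is with collections.Counter from the standard library. This is my assumption of an implementation, not necessarily what the module does:

```python
from collections import Counter

def get_sorted_by_most_common(items):
    # unique items, ordered from most to least frequent
    return [item for item, _count in Counter(items).most_common()]
```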

That’s mostly it. Please ask questions if you have any. Leave a comment. Let me know how I’m doing. Thank you all.


shared.python.utils 1.1

Github link: shared.python.utils

Hello, and welcome to our first module. In this one we will make the most generic scripts we can think of. These are meant to help us in writing Python, and they should not have any dependencies other than Python itself or other well-established Python libraries.


The make_list() function converts any Python object into a list(). This is particularly useful when dealing with packages such as Maya and 3ds Max. Many functions in those packages will return either a list() or a str(), depending on how many entries are returned. This is super annoying. You end up having to test the result to see if it's a list() every time (ugh!). After writing that check seven times, I gave up.

Another added benefit of this function is that it can help you in making functions that take one or more items as arguments. For example:

# you can call a log function with a text:
log("Tell me something")

# or with an array:
log(["Tell me this", "Tell me that"])

Here is how that function would work:

import shared.python.utils as pyutils
# new_list == ["string"] 
new_list = pyutils.make_list("string")

# new_list == ["string"]
new_list = pyutils.make_list(["string"])
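A sketch of how make_list() could be implemented, based on the behavior shown above (the None handling is my assumption):

```python
def make_list(value):
    # wrap anything that isn't already a list
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]
```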


Have you noticed that some functions return False when they should return an empty string? Check this example:

results = list()

for item in list_of_items:
    results += imported.get_value(item)

Imagine that we don't own 'imported'; it's an imported library. And imagine that get_value() returns False if there are no values to get from item. If item ever has no values to return, you would get an error that looks like this:

TypeError: can only concatenate list (not "bool") to list

The solution is simple but annoying. After writing it several times, I decided to add it to my utils module.

This is how you would use it:

import shared.python.utils as pyutils

# new_list = ["one", "two", "three", "four"]
new_list = pyutils.join_lists(["one","two"],["three", False], ["four"])
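Based on the example above, a sketch of join_lists() could look like this. The exact handling of False and None entries is inferred from that example, so treat it as an assumption:

```python
def join_lists(*lists):
    # concatenate lists, quietly skipping False/None arguments and entries
    joined = list()
    for each in lists:
        if not each:
            continue  # skip False, None, and empty lists
        if not isinstance(each, list):
            each = [each]
        joined.extend(
            [item for item in each if item is not False and item is not None]
        )
    return joined
```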


This one exists only for readability's sake. It's not difficult to do in a single line, but it's hard-ish to read.

# it's not easy to read this:
sans_duplicates = list(set(objects))

# this is much easier:
sans_duplicates = pyutils.remove_duplicates(objects)


I often find myself checking a list to see if it contains a bunch of items. You can use the set.intersection() function for this, but it's hard to read. It ends up looking like this:

# how am I supposed to know what this means???
if len(set(full_list).intersection(items)):

# much easier to read, I think.
if pyutils.are_items_in_list(items, full_list):
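A sketch of the readable version, wrapping the same one-liner (note that, as written, it is true when at least one of the items is present; this matches the snippet above):

```python
def are_items_in_list(items, full_list):
    # True if any of the given items appear in the list
    return len(set(full_list).intersection(items)) > 0
```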

I suspect you are beginning to see a pattern: I always try to make code more readable. In many cases I overdo it, I know. However, when I go back to my code weeks, months, and years later, I don't struggle with it so much. So I feel it's worth it.

I think that’s enough for today. I will deal with more of this module on the next post.

Let me know in the comments if this is a helpful format. Also, if you have any questions about any of the modules mentioned, or any of the ones not mentioned, please don't hesitate.

As always, I welcome any suggestions.

// Isoparm