˚˚ Practical Labs 2022
4CAT: Capture and Analysis Toolkit
Memespector GUI: Enriching image data with AI
Twitter studies with NodeXL Pro
Visualizing collections of images
RAWGraphs 1.0: from spreadsheet to visualization
RAWGraphs 2.0: tricks and other visual models
Placplac: a new visual format for research dissemination
AppInspect: Knowledge extraction from Android apps
Image Analysis with ImageJ: Studying TikTok Vernaculars
Meme analysis
˚˚ Recorded tutorials
Three is a trend: how to use Instagram data to better visualise a trend
Offline Image Query and Extraction Tool
Tutorials on SMART playlists
˚˚ Practical Labs 2022
1. Practical Lab | 4CAT: Capture and Analysis Toolkit
#folder or slides URL: |
2. Facilitators | Bernhard Rieder
3. Short Description | 4CAT is a research tool that can be used to analyze and process data from various online social platforms. Its goal is to make the capture and analysis of data from these platforms accessible to people through a web interface, without requiring any programming or web-scraping skills. 4CAT has a (growing) number of supported data sources corresponding to popular platforms, including 4chan, 8kun, Bitchute, Parler, Reddit, Telegram, and Twitter (academic and regular tracks). It also supports Facebook, Instagram, and TikTok via external data import. This practical lab will introduce the basic functionalities of 4CAT and show how it can be used for academic research. Readings: Peeters, S., & Hagen, S. (forthcoming). The 4CAT Capture and Analysis Toolkit: A Modular Tool for Transparent and Traceable Social Media Research. Computational Communication Research. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3914892
4. Requirements | No requirements.
1. Practical Lab | Memespector GUI: Enriching image data with AI
#folder or slides URL: | https://bit.ly/memespectorGUI |
2. Facilitators | Jason Chao & Janna Joceli Omena |
3. Short Description | Participants will learn how to exploit AI technologies to enrich image datasets. Participants will be introduced to the affordances of the computer vision APIs supported by Memespector-GUI: Google Vision (proprietary), Microsoft Azure Cognitive Services (proprietary), Clarifai (proprietary), and an image classifier based on Keras (open source). Memespector-GUI is a tool with a graphical user interface which helps researchers invoke proprietary and open-source computer vision APIs to analyse images with ease.
4. Requirements | Preparation: In this tutorial, we recommend that participants try invoking at least one proprietary API to process an image dataset. Participants are advised to register with one of the APIs beforehand by following the instructions for Google Cloud, Microsoft Azure or Clarifai. (Google Cloud and Microsoft Azure may ask for bank card details; they will check whether the card is active but will not charge it during registration.) Download Memespector-GUI at https://github.com/jason-chao/memespector-gui/releases (a minimal example of the kind of API call the tool makes is sketched below for reference).
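For reference only, and not part of the official lab materials: the snippet below is a minimal sketch of the kind of request Memespector-GUI issues when it invokes Google Vision label detection. It assumes the google-cloud-vision Python client is installed and a service-account key is exposed through the GOOGLE_APPLICATION_CREDENTIALS environment variable; the file name meme.jpg is a placeholder.

    from google.cloud import vision  # pip install google-cloud-vision

    # Assumption: a service-account key is configured via the
    # GOOGLE_APPLICATION_CREDENTIALS environment variable.
    client = vision.ImageAnnotatorClient()

    with open("meme.jpg", "rb") as f:  # placeholder image file
        image = vision.Image(content=f.read())

    # Request label detection and print each label with its confidence score.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}\t{label.score:.2f}")

Memespector-GUI wraps calls of this kind (and the equivalent Azure, Clarifai and Keras classifiers) behind its interface and writes the results to a spreadsheet-friendly file.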
1. Practical Lab | Twitter studies with NodeXL Pro
#folder or slides URL: | https://drive.google.com/drive/folders/1cmtcPMRhxUincZFFSiLlWrR8RtsG_Ur_?usp=sharing |
2. Facilitators | Harald Meier |
3. Short Description | Social network analysis (SNA) is a powerful research method from the social sciences with many practical applications, especially when applied to complex collections of relationships such as social media data. This session will provide an overview of the SNA tool NodeXL Pro, which is a plug-in for Microsoft Office Excel on Windows. No prior knowledge of social network analysis or Excel is required. Attendees will learn how to conduct an SNA with Twitter data step by step. This includes importing data, identifying clusters, detecting influencers, generating a content and sentiment analysis, and creating a network visualization. In addition, attendees will learn about NodeXL Pro INSIGHTS – a new tool that allows the exploration of the analyzed data in Microsoft Power BI.
4. Requirements | 1) Please send an email to harald@smrfoundation.org to receive a free NodeXL Pro user license. You will then receive an email that contains a download link to the software and detailed installation instructions. If you are a Mac user, please note this in your email in order to receive access to the NodeXL Pro Cloud edition. 2) Attendees should have a Twitter account in order to collect Twitter data.
1. Practical Lab | Visualizing collections of images
#folder or slides URL: | https://bit.ly/SMART22images |
2. Facilitator(s) | Elena Aversa |
3. Short Description | This tutorial will introduce ImageJ, ImageSorter, PicArrange and PixPlot, and how to do research with digital images. Specifically, we will explore some techniques for observing a collection of images based on the spatial arrangement of its elements.
4. Requirements | Participants need to bring their own computers and download and install ImageSorter, ImageJ, Anaconda (for PixPlot), and PicArrange (macOS users only).
1. Practical Lab | RAWGraphs 1.0: from spreadsheet to visualization
#folder or slides URL: | https://bit.ly/RAWGraphs2022SMART |
2. Facilitator(s) | Camilla De Amicis, Maria Celeste Casolino |
3. Short Description | RAWGraphs is an easy-to-use, open-source data visualization framework: in this practical lab you will learn how to use it to produce beautiful and effective visualizations from a variety of data formats. The lab will be divided into two parts: (1) a detailed introduction to RAWGraphs, visual models, data structures and much more; (2) a guided activity on the RAWGraphs interface and on which visual models suit which kinds of data structures.
4. Requirements | Participants need to bring their own computers. We will work with spreadsheets and RAWGraphs (take a look beforehand to get familiar with it).
1. Practical Lab | RAWGraphs 2.0: tricks and other visual models
#folder or slides URL: | |
2. Facilitator(s) | Federico Meani, Mattia Mertens |
3. Short Description | This practical lab is an advanced version of “From spreadsheet to visualization with RAWGraphs”. You will use RAWGraphs to produce visualizations based on “not-so-usual” visual models (such as a treemap with images) and get hints about the new features of RAWGraphs 2.0. The lab will be divided into two parts: (1) a brief introduction to the theory of visual variables and visual models; (2) hands-on, guided activities on RAWGraphs.
4. Requirements | Participants need to bring their own computers. We will work with spreadsheets and RAWGraphs.
1. Practical Lab | Placplac: a new visual format for research dissemination
#folder or slides URL: | TBA |
2. Facilitator(s) | Angeles Briones |
3. Short Description | This practical lab introduces a new dissemination format to store, stage, and access the results of data sprints after the research activities. The format is based on a tool called Placplac, which is part of a research project. The tool is intended as a digital place that allows researchers to expose the process of doing research with digital methods. It emphasises narrating the research process, articulating content through images, visualizations, audiovisual material and other media. During the workshop, participants will learn the main features of the interface and will be guided through completing the required information.
4. Requirements | Participants need to bring their own computers. |
1. Practical Lab | AppInspect: Knowledge extraction from Android apps
#folder or slides URL: | TBA |
2. Facilitator(s) | Jason Chao |
3. Short Description | Mobile devices and applications (apps) have become intimate objects in people’s daily lives. This practical lab will introduce AppInspect – a new tool enabling researchers from different backgrounds to extract knowledge from Android apps. AppInspect helps researchers study user privacy and the economy around apps. Researchers can easily identify trackers (actors), permissions and privacy-invading functionalities with AppInspect. https://appinspect.jasontc.net/
4. Requirements | No requirements. |
1. Practical Lab | Image Analysis with ImageJ: Studying TikTok Vernaculars
#folder or slides URL: | https://drive.google.com/drive/u/2/folders/1SOKCmX9PkVg9lYoap3GyyBVM0VdfiYnc |
2. Facilitator(s) | Elena Pilipets |
3. Short Description | ImageJ is a free software tool and image processing program that can be used to measure, sort, and visualize collections of images according to their visual similarity, time of publication, and various other features. The macros ImagePlot and ImageMontage, developed by the Software Studies Initiative, extend it to enable the exploration of patterns in image metadata. This practical lab will introduce different plotting and montage techniques for studying TikTok visual vernaculars, using a collection of TikTok videos/video thumbnails and platform metadata such as timestamps, music, digg count, hashtags, etc. scraped with TikTok scraper. The focus will be on studying the temporal, aesthetic, and contextual specifics of video creation in TikTok imitation publics, as exemplified through a collection of #boredinthehouse memes. In addition, participants will receive a written-out ‘walkthrough’ for downloading images from a list of image URLs with DownThemAll and for installing FFmpeg (a command-line tool for extracting images frame by frame from a video file; a minimal frame-extraction sketch follows this lab entry).
4. Requirements |
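As a reference for the FFmpeg step mentioned above, here is a minimal sketch (not part of the lab materials) of calling FFmpeg from Python to extract one frame per second from a video. It assumes FFmpeg is installed and available on the PATH; the file name boredinthehouse.mp4 is a placeholder.

    import subprocess
    from pathlib import Path

    video = Path("boredinthehouse.mp4")  # placeholder video file
    outdir = Path("frames")
    outdir.mkdir(exist_ok=True)

    # Extract one frame per second; raise the fps value for denser sampling.
    subprocess.run(
        ["ffmpeg", "-i", str(video), "-vf", "fps=1", str(outdir / "frame_%04d.png")],
        check=True,
    )

The same command can of course be run directly in a terminal; the Python wrapper simply makes it easy to loop over a folder of downloaded videos.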
1. Practical Lab | Meme analysis
#folder or slides URL: | https://drive.google.com/drive/u/2/folders/1SOKCmX9PkVg9lYoap3GyyBVM0VdfiYnc |
2. Facilitator(s) | Elena Pilipets |
3. Short Description | This practical lab introduces the possibilities of exploring TikTok memes and other visual vernaculars through combinations of platform-specific image metadata and AI-driven image analysis. We will first filter and organize images according to their different digital attributes (e.g., time of posting, engagement metrics, hashtags, sounds, and image captions with text, emojis, etc.) in Google Spreadsheets and visualize a collection of images according to content similarity (e.g., using Google Vision API labels or web entities). We will then use ImageJ’s ImageMontage macro to visualize a series of memes by similarity/time frame and RAWGraphs to create a visualization highlighting different memetic hierarchies and patterns of association in the same dataset. We will discuss (1) the contextual situatedness of memes as provided through co-hashtag and other relations; (2) the restrictions of looking only at the content with the most exposure; and (3) the patterns of image adaptation over time. (A small sketch of preparing an image–label file for RAWGraphs follows this lab entry.)
4. Requirements | Participants are recommended to attend Memespector GUI: Enriching image data with AI _before_ this practical lab. In the first part of the tutorial, we will be using ImageMontage to visualize a series of TikTok memes; please make sure you have ImageJ installed. In the second part of the tutorial, we will be using RAWGraphs.
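Purely as an illustration of the kind of data wrangling this lab involves, and not part of the official materials: the sketch below flattens a hypothetical CSV export, in which each image row carries comma-separated Google Vision labels, into an image–label edge list that can be loaded into RAWGraphs or a spreadsheet. The file and column names (images_with_labels.csv, image_file, labels, digg_count) are assumptions, not the lab’s actual dataset.

    import csv

    # Hypothetical input: one row per image, with comma-separated Vision labels
    # in a "labels" column (all file and column names are placeholders).
    with open("images_with_labels.csv", newline="", encoding="utf-8") as infile, \
         open("image_label_edges.csv", "w", newline="", encoding="utf-8") as outfile:
        reader = csv.DictReader(infile)
        writer = csv.writer(outfile)
        writer.writerow(["image", "label", "digg_count"])
        for row in reader:
            for label in row["labels"].split(","):
                label = label.strip()
                if label:  # skip empty entries
                    writer.writerow([row["image_file"], label, row["digg_count"]])

Each output row pairs one image with one label (plus an engagement metric), which is the flat structure RAWGraphs expects for models such as alluvial diagrams or treemaps.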
˚˚ Recorded tutorials
1. Practical Lab | Three is a trend: how to use Instagram data to better visualise a trend
#folder or slides URL: | https://youtu.be/1JVK1urhKoc |
2. Facilitator(s) | Ana Marta M. Flores |
3. Short Description | A socio-cultural trend may be based on the repetition of a similar behaviour, either online or offline. How can a new trend be identified or verified on social platforms? Considering Instagram to be one of the most visual and popular environments worldwide, this practical lab aims to apply a method recipe that explains the whole research process, from query design to data visualisation, through a specific example. To do so, we will perform a preliminary content analysis by identifying and combining high-engagement posts and visual patterns or categories in the dataset. Afterwards, we will explore the images in ImageSorter to better present the findings.
4. Requirements | Participants must have a basic knowledge of Google Spreadsheets, DownThemAll and Trend Studies. |
1. Practical Lab | Offline Image Query and Extraction Tool
#folder or slides URL: | https://bit.ly/offline-image-query-tool |
2. Facilitator(s) | Janna Joceli Omena & Jason Chao |
3. Short Description | In this tutorial participants will learn how to navigate specific collections of images located in a folder, understanding when and why this technique can facilitate the study of visual content. The Offline Image Query and Extraction Tool was created as a response to the short life of image URLs, allowing researchers to explore and navigate visual content according to its different characteristics, such as: image query according to engagement metrics, e.g. shares, comments, views, likes, retweets, etc. (the site of image audiencing); image query according to user accounts or link domains (the site of image creation or appearance); image query according to computer vision outputs, e.g. labels, top-level link domains, web entities, not-safe-for-work content, etc. (the content of the image itself or sites of image circulation); and image query according to published time, date, month or year.
4. Requirements | Preparation: We recommend that participants download the tool, available here: https://github.com/jason-chao/offline-image-query/releases/ Also download and install ImageSorter, available here: https://visual-computing.com/project/imagesorter/ Finally, download this folder: https://drive.google.com/drive/folders/1iHpErBuDfmLCNc69MI6Y0YcLPTYBUyGL?usp=sharing
→ Practical Labs | SMART Data Sprint 2021