Blografia.net

September 02, 2024

Gwolf

Free and open source software and other market failures

This post is a review for Computing Reviews of Free and open source software and other market failures, an article published in Communications of the ACM.

Understanding the free and open-source software (FOSS) movement has, since its beginning, implied crossing many disciplinary boundaries. This article describes FOSS’s history, explaining its undeniable success throughout the 1990s, and why the movement today feels in a way as if it were on autopilot, lacking the “steam” it once had.

The author presents several examples from different industries where, as happened with FOSS in computing, fundamental innovations came about not because the leading companies in each field were attentive to customers’ needs but, to a certain degree, despite those companies not even considering such needs, typically due to the hubris that comes with being a market leader.

Kamp exemplifies his hypothesis by presenting the messy landscape of the commercial, mutually incompatible Unix systems of the 1980s. Different companies had set out to implement their particular flavor of “open Unix computers,” but with clear examples of vendor lock-in techniques. He speculates that, “if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs … nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby.” He states that the FOSS movement was born out of the utter market failure of the different Unix vendors.

The focus of the article shifts then to the FOSS movement itself: 25 years ago, as FOSS systems slowly gained acceptance and then adoption in the “serious market” and at the center of the dot-com boom of the early 2000s, Linux user groups (LUGs) with tens of thousands of members bloomed throughout the world; knowing this history, why have all but a few of them vanished into oblivion?

Kamp suggests that the strength and vitality that LUGs had ultimately reflected the anger that prompted technical users to take the situation into their own hands and fix it; once the software industry was forced to change, the strongly cohesive FOSS movement became diluted. “The frustrations and anger of [information technology, IT] in 2024,” Kamp writes, “are entirely different from those of 1991.” As an example, the author closes by citing the difficulty of maintaining, even when the resources to do so are available, an aging legacy codebase that needs to continue working year after year.

September 02, 2024 07:08 PM

August 18, 2024

Gwolf

The social media my blog (as well as some other sites I publish in) is pushed to will soon stop receiving updates

For many years, I have been using the dlvr.it service to echo my online activity to where more people can follow it. Namely, I write in the following sources:

Via dlvr.it’s services, all those posts are “echoed” to Gwolfwolf on X (Twitter) and to the Gunnarwolfi page on Facebook. I use neither platform as a human (that is, I never log in there).

Anyway, dlvr.it sent me a mail stating they would soon (as in, within the next few weeks) be cutting their free tier. And although I value their service and am thankful for what it has provided so far, I am not going to pay for my personal stuff to be reposted to social media.

So, this post’s mission is twofold:

  1. If you follow me via any of those media, you will soon not be following me anymore 😉
  2. If you know of any service that would fill the space left by dlvr.it, I will be very grateful. Extra gratefulness points if the option you suggest is able to post to accounts in less-proprietary media (i.e. the Fediverse). Please tell me by mail (gwolf@gwolf.org).

Oh! Forgot to mention: Of course, my blog will continue to appear in Planet Debian, Blografía, and any decent aggregator that consumes my RSS.

August 18, 2024 11:17 PM

August 01, 2024

Victor Martínez

Recovering Flash-based teaching materials: Aprendamos Náhuatl

Well, this year we took part a bit less in CHAT 2024 [1], and I still have not finished watching the talks that interest me [2]. Partly to promote the event, I have a couple of invitations at UPN to talk about the use of AI in education; and although I have colleagues who have written about and researched the subject far more than I have, it seems this year’s theme really did capture people’s attention, even if it is not what I presented on.

What I did do was present the project I worked on last year: recovering a teaching resource developed in Flash. When I first talked about it, I thought it was an even older one that ran on Authorware and seemed simple to get running again; it turned out to be one I did not know, which ran on the Web, so from April to October of 2023 I did what is shown in the video and in the presentation [3].

In short, Adobe announced the end of Flash as a technology well in advance. In our case, recovering the content came first; since we did not have the source files, a Flash SWF to Animate/HTML5 migration was not possible. We found a free and modern option, Ruffle, which solves the problems that led Adobe to abandon Flash, but creates a few new ones by not implementing EVERYTHING the original product did, especially its design errors and several insecure practices.

This is not a how-to guide, but using Ruffle, much of the educational software in which time has already been invested, and for which neither the sources nor the resources to reimplement it are available, can be recovered, whether as a historical resource or to extend its use; to a certain extent it can even be modified and brought up to date, with a fair amount of work.
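
To give an idea of what the Ruffle approach looks like in practice (a minimal sketch, not what we actually deployed; the SWF file name and directory layout are made up for illustration): the self-hosted Ruffle build distributed at ruffle.rs ships a ruffle.js plus its WebAssembly core, and ruffle.js polyfills Flash <embed>/<object> tags, so a tiny page is enough, as long as it is served over HTTP:

# Unpack the self-hosted Ruffle build (from ruffle.rs/downloads) into ./ruffle,
# next to the original SWF, then create a minimal page that loads it:
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <body>
    <!-- ruffle.js replaces Flash <embed>/<object> tags with its own player -->
    <script src="ruffle/ruffle.js"></script>
    <embed src="aprendamos_nahuatl.swf" width="800" height="600">
  </body>
</html>
EOF
# Browsers will not load the .wasm core from file://, so serve it locally:
python3 -m http.server 8000   # then open http://localhost:8000/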

Ah, and the interactive material itself is available for consultation [4].

[1] https://chat.iztacala.unam.mx/elchat/6
[2] https://www.youtube.com/@CHATIztacala/streams
[3] http://blografia.net/vicm3/wp-content/uploads/2024/07/Recuperacion-de-material-didactico-en-flash.pdf
[4] http://linux.ajusco.upn.mx/~vicm3/nahuatl/

by vicm3 at August 01, 2024 03:16 AM

July 24, 2024

Gwolf

DebConf24 Continuous Key-Signing Party

🎉🥳🤡🎂🍥 Yay, party! 🎉🥳🤡🎂🍥

🎉🥳🤡🎂🍥 Yay, crypto! 🎉🥳🤡🎂🍥

DebCamp has started, and in a couple of days, we will fully be in DebConf24 mode!

As most of you know, an important part that binds Debian together is our cryptographic identity assurance, and that is in good measure tightened by the Continuous Key-Signing Parties we hold at DebConfs and other Debian and Free Software gatherings.

As I have done during (most of) the past DebConfs, I have prepared a set of pseudo-social maps to help you find where you are in the OpenPGP mesh of our conference. Naturally, Web-of-Trust maps should be user-centered, so find your own at:

https://people.debian.org/~gwolf/dc24_ksp/

The list is now final and will not receive any modifications (I asked for them some days ago); if your name still appears on the list and you don’t want to be linked to the DC24 KSP in the future, tell me and I’ll remove it from future versions of the list (but it remains part of the final DC24 file, as its checksum is already final).

Speaking of which!

If you are to be a part of the keysigning, get the final DC24 file and, on a device you trust, check its SHA256 by running:

$ sha256sum dc24_fprs.txt
11daadc0e435cb32f734307b091905d4236cdf82e3b84f43cde80ef1816370a5  dc24_fprs.txt

Make sure the resulting number matches the one I’m presenting. If it doesn’t, ensure your copy of the file is not corrupted (i.e. download it again). If it still does not match, notify me immediately.

Does any of the above confuse you? Please come to (or at least, follow the stream for) my session on DebConf opening day, Continuous Key-Signing Party introduction, 10:30 Korean time; I will do my best to explain the details to you.

PS- I will soon provide a simple, short PDF that will probably be mass-printed at FrontDesk so that you can easily track your KSP progress.

July 24, 2024 03:25 AM

July 17, 2024

Gwolf

Script for weather reporting in Waybar

While I was living in Argentina, we (my family) found ourselves checking weather forecasts almost constantly — weather there can be quite unexpected, much more so than here in Mexico. So it took me a bit of tinkering to come up with a couple of simple scripts to show the weather forecast as part of my Waybar setup. I hadn’t cared to share them with anybody, as I believe them to be quite trivial and quite dirty.

But today, Víctor was asking for some slightly-related things, so here I go. Please do remember I warned: Dirty.

Forecast

I am using OpenWeather’s open API. I had to register to get an APPID, and it allows me up to 1,000 API calls per day, more than plenty for my uses, even if I am logged in at my desktop on three different computers (not an uncommon situation). Having that, I set up a file named /etc/get_weather, which currently reads:

# Home, Mexico City
LAT=19.3364
LONG=-99.1819

# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317

# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055

APPID=SomeLongRandomStringIAmNotSharing

Then, I have a simple script, /usr/local/bin/get_weather, that fetches the current weather and the forecast, and stores them as /run/weather.json and /run/forecast.json:

#!/usr/bin/bash
CONF_FILE=/etc/get_weather

if [ -e "$CONF_FILE" ]; then
    . "$CONF_FILE"
else
    echo "Configuration file $CONF_FILE not found"
    exit 1
fi

if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
    echo "Configuration file must declare latitude (LAT), longitude (LONG) "
    echo "and app ID (APPID)."
    exit 1
fi

CURRENT=/run/weather.json
FORECAST=/run/forecast.json

wget -q "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${CURRENT}"
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${FORECAST}"

This script is called by the corresponding systemd service unit, found at /etc/systemd/system/get_weather.service:

[Unit]
Description=Get the current weather

[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather

And it is run every 15 minutes via the following systemd timer unit, /etc/systemd/system/get_weather.timer:

[Unit]
Description=Get the current weather every 15 minutes

[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service

[Install]
WantedBy=multi-user.target

(yes, it runs even if I’m not logged in, wasting some of my free API calls… but within reason)
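
(One step not spelled out above, so take this as the usual systemd procedure rather than part of my notes: after dropping the two unit files in place, systemd has to be told about them and the timer enabled. Something along these lines should do.)

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now get_weather.timer
$ systemctl list-timers get_weather.timer   # check when it will fire next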

Then, I declare a "custom/weather" module in the desired position of my ~/.config/waybar/waybar.config, and define it as:

"custom/weather": {
    "exec": "while true;do /home/gwolf/bin/parse_weather.rb;sleep 10; done",
"return-type": "json",
},

This script basically morphs a generic weather JSON description into another set of JSON bits that display my weather the way I prefer to have it displayed:

#!/usr/bin/ruby
require 'json'

Sources = {:weather => '/run/weather.json',
           :forecast => '/run/forecast.json'
          }
Icons = {'01d' => '🌞', # d → day
         '01n' => '🌃', # n → night
         '02d' => '🌤️',
         '02n' => '🌥',
         '03d' => '☁️',
         '03n' => '🌤',
         '04d'  => '☁️',
         '04n' => '🌤',
         '09d' => '🌧️',
         '10n' =>  '🌧 ',
         '10d' => '🌦️',
         '13d' => '❄️',
         '50d' => '🌫️'
        }

ret = {'text' => nil, 'tooltip' => nil, 'class' => 'weather', 'percentage' => 100}

# Current weather report: Main text of the module
begin
  weather = JSON.parse(open(Sources[:weather],'r').read)

  loc_name = weather['name']
  icon = Icons[weather['weather'][0]['icon']] || '?' + weather['weather'][0]['icon'] + weather['weather'][0]['main']

  temp = weather['main']['temp']
  sens = weather['main']['feels_like']
  hum = weather['main']['humidity']

  wind_vel = weather['wind']['speed']
  wind_dir = weather['wind']['deg']

  portions = {}
  portions[:loc] = loc_name
  portions[:temp] = '%s 🌡%2.2f°C (%2.2f)' % [icon, temp, sens]
  portions[:hum] = '💧 %2d%%' % hum
  portions[:wind] = '🌬%2.2fm/s %d°' % [wind_vel, wind_dir]
  ret['text'] = [:loc, :temp, :hum, :wind].map {|p| portions[p]}.join(' ')
rescue => err
  ret['text'] = 'Could not process weather file (%s ⇒ %s: %s)' % [Sources[:weather], err.class, err.to_s]
end

# Weather prevision for the following hours/days
begin
  cast = []
  forecast = JSON.parse(open(Sources[:forecast], 'r').read)
  min = ''
  max = ''

  day=Time.now.strftime('%Y.%m.%d')
  by_day = {}
  forecast['list'].each_with_index do |f,i|
    by_day[day] ||= []
    time = Time.at(f['dt'])
    time_lbl = '%02d:%02d' % [time.hour, time.min]

    icon = Icons[f['weather'][0]['icon']] || '?' + f['weather'][0]['icon'] + f['weather'][0]['main']

    by_day[day] << f['main']['temp']
    if time.hour == 0
      min = '%2.2f' % by_day[day].min
      max = '%2.2f' % by_day[day].max
      cast << '        ↑ min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
      day = time.strftime('%Y.%m.%d')
      cast << '     ┍━━━━━┫  <b>%04d.%02d.%02d</b> ┠━━━━━┑' %
              [time.year, time.month, time.day]
    end
    cast << '%s | %2.2f°C | 🌢%2d%% | %s %s' % [time_lbl,
                                                f['main']['temp'],
                                                f['main']['humidity'],
                                                icon,
                                                f['weather'][0]['description']
                                               ]
  end
  cast << '        ↑ min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]

  ret['tooltip'] = cast.join("\n")
  
rescue => err
  ret['tooltip'] = 'Could not process forecast file (%s ⇒ %s: %s)' % [Sources[:forecast], err.class, err.to_s]
end


# Print out the result for Waybar to process
puts ret.to_json
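
Not described above, but handy while debugging (a quick check of my own, not part of the setup itself): the script can be run by hand to make sure it emits the single JSON object Waybar expects; any parsing problem will show up in the text field instead of silently breaking the module:

$ ls -l /run/weather.json /run/forecast.json
$ /home/gwolf/bin/parse_weather.rb | python3 -m json.tool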

The end result? Nothing too stunning, but definitely something I find useful and even nicely laid out:

Screenshot

Do note that it seems OpenWeather will return the name of the closest available meteorology station with (most?) recent data — for my home, I often get Ciudad Universitaria, but sometimes Coyoacán or even San Ángel Inn.

July 17, 2024 05:32 PM

Victor Martínez

Redshift

Some time ago, probably around 2019, I noticed on my Android phone the option to change the screen’s color temperature, to help the eyes adjust away from blue light by shifting toward a red tone, and possibly to avoid disrupting the sleep cycle. Surely this has long been implemented in the Windows and Mac world as well. At the time I searched in Debian and found Redshift [1] and Redshift-gtk [2], which do what several such applications do. I especially liked that it takes location data from the network through a service called Geoclue [3], which uses the Mozilla Foundation’s MLS service [4]. This year it started failing, and in June it suddenly stopped working altogether. Looking through the package’s bugs I found similar, but not identical, reports, and I did not know how to write up my own, so I am posting it here as an entry to see whether some Debianite can help me give it the proper format for a bug report.

Redshift supports setting the location manually; the documentation explains how to do it, and in very small print it says that longitudes in the Americas and latitudes in the southern cone are negative. This matters because, in searches, a couple of search engines report the absolute value if one does not open the full document and read carefully.

What happened?
It turns out that MLS was gradually abandoned, and in March of this year it stopped answering requests [5]; as a result Geoclue-2.0 stopped working and, finally, so did Redshift.

The solution is to set the latitude and longitude manually. The example in the documentation is for Copenhagen and it is quite good; the man page says:

Your current location, in degrees, given as floating point numbers, towards
north and east, with negative numbers representing south and west,
respectively.

Or, as the web site puts it in a way more sensible for someone in a hurry [6]:

When you specify a location manually, note that a location south of equator has a negative latitude and a location west of Greenwich (e.g the Americas) has a negative longitude.

So, from what I read, it is unlikely that Redshift will see updates, as it has not had major changes since 2018. But one can fix the issue, as I did, by creating a file at .config/redshift.conf with the correct data. For Mexico City we get 19.4326° N, 99.1332° W, that is, 19.43 and -99.13 in decimal; if we enter them without the negative sign, instead of Mexico City we end up somewhere in Thailand, with the clock flipped, which left me three days with a very blue screen well into the night.
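
For reference, a minimal sketch of such a file (the coordinates are the Mexico City values discussed above; the temperatures simply mirror the manual invocation shown below):

[redshift]
temp-day=6700
temp-night=3800
location-provider=manual

[manual]
lat=19.43
lon=-99.13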

Or, as I was doing it by hand when I did not yet understand that I was missing the negative signs; for example, if you are traveling you can do it on the spot (I know one could create several configurations for the cities to be visited, but that is more effort than I care for).

$ redshift -O 3900   # night / one time
$ redshift -x        # reset

$ redshift -l 55.7:12.6 -t 5700:3600 -g 0.8 -m randr -v   # Example for Copenhagen, Denmark, from man

$ redshift -l 19.43:-99.13 -t 6700:3800 -m randr -v       # Mexico City, note the - sign

So, although I suppose the best thing would be to open a ticket against redshift or redshift-gtk and mark it +packages or +depends geoclue-2.0 (I do not remember exactly how that goes), and since in English I do not think it would come out as well as this entry does, I prefer to post it here while I see whether I can summarize it in English.

[1] http://jonls.dk/redshift/
[2] https://packages.debian.org/bookworm/redshift
[3] https://gitlab.freedesktop.org/geoclue/geoclue/wikis/home
[4] https://en.wikipedia.org/wiki/Mozilla_Location_Service
[5] https://discourse.mozilla.org/t/retiring-the-mozilla-location-service/128693
[6] http://jonls.dk/redshift/

by vicm3 at July 17, 2024 01:51 AM

Gwolf

Scholarly spam • «Wulfenia»

I just got one of those utterly funny spam messages… And yes, I recognize everybody likes building a name for themselves. But some spammers are downright silly.

I just got the following mail:

From: Hermine Wolf <hwolf850@gmail.com>
To: me, obviously 😉
Date: Mon, 15 Jul 2024 22:18:58 -0700
Subject: Make sure that your manuscript gets indexed and showcased in the prestigious Scopus database soon.
Message-ID: <CAEZZb3XCXSc_YOeR7KtnoSK4i3OhD=FH7u+A5xSMsYvhQZojQA@mail.gmail.com>

This message has visual elements included. If they don't display, please   
update your email preferences.

*Dear Esteemed Author,*


Upon careful examination of your recent research articles available online,
we are excited to invite you to submit your latest work to our esteemed    
journal, '*WULFENIA*'. Renowned for upholding high standards of excellence 
in peer-reviewed academic research spanning various fields, our journal is 
committed to promoting innovative ideas and driving advancements in        
theoretical and applied sciences, engineering, natural sciences, and social
sciences. 'WULFENIA' takes pride in its impressive 5-year impact factor of 
*1.000* and is highly respected in prestigious databases including the     
Science Citation Index Expanded (ISI Thomson Reuters), Index Copernicus,   
Elsevier BIOBASE, and BIOSIS Previews.                                     
                                                                           
*Wulfenia submission page:*                                                
[image: research--check.png][image: scrutiny-table-chat.png][image:        
exchange-check.png][image: interaction.png]                                
.                                                                          

Please don't reply to this email                                           
                                                                           
We sincerely value your consideration of 'WULFENIA' as a platform to       
present your scholarly work. We eagerly anticipate receiving your valuable 
contributions.                                                             

*Best regards,*                                                            
Professor Dr. Vienna S. Franz                                              

Scholarly spam

Who cares what Wulfenia is about? It’s about you, my stupid Wolf cousin!

July 17, 2024 12:23 AM

June 26, 2024

Gwolf

Many terabytes for students to play with. Thanks Debian!

LIDSOL students receiving their new hard drives

My students at LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Lab) at Facultad de Ingeniería, UNAM asked me to help them get the hardware needed to set up a mirror for various free software projects. We have some decent servers (not new servers, but mirrors don’t need to be top performers), so…

A couple of weeks ago, I approached the Debian Project Leader (DPL) and suggested we should buy two 16 TB hard drives for this project, as that was the most reasonable cost-per-byte point I found. He agreed, and I bought the drives. Today we had a lab meeting, and I handed the hardware over to them.

Of course, they are very happy with and thankful to the Debian project 😃 In the last couple of weeks, they have already set up an Archlinux mirror (https://archlinux.org/mirrors/fi-b.unam.mx), and now that they have heaps of storage space, plans are underway to set up various other mirrors (of course, a Debian mirror will be among the first).

June 26, 2024 02:32 AM

June 25, 2024

Gwolf

Find my device - Whether you like it or not

I received a mail today from Google (noreply-findmydevice@google.com) notifying me that they would unconditionally enable the Find my device functionality I have been repeatedly marking as unwanted in my Android phone.

The mail goes on to explain that this functionality works even when the device is disconnected, by means of Bluetooth signals (aha, so “turn off Bluetooth” will no longer turn off Bluetooth? Hmmm…)

Of course, the mail hand-waves that only I can know the location of my device. «Google cannot see or use it for other ends». First, should we trust this blanket statement? Second, the fact they don’t do it now… means they won’t ever? Not even if law enforcement requires them to? The devices will be generating this information whether we want it or not, so… it’s just a matter of opening the required window.

Targeting an individual in a crowd

Of course, it is a feature many people will appreciate and find useful. And it’s not only for finding lost (or stolen) phones; the mail also mentions tags can be made available to put in your wallet, bike, keys or whatever. But it should be opt-in. As it is, it seems it’s not even possible to opt out of it.

June 25, 2024 05:11 PM

June 21, 2024

Gwolf

A new RISC-V toy... requiring almost no tinkering

Shortly before coming back from Argentina, I got news of a very interesting set of little machines, the Milk-V Duo. The specs looked really interesting and fun to play with, particularly those of the “bigger” model, the Milk-V Duo S. Some of the highlights:

Milk-V Duo S

  • The SG2000 SoC is a Dual-architecture beast. A hardware switch controls whether the CPU is an ARM or a RISC-V.
    • Not only that: It has a second (albeit lesser) RISC-V core that can run independently. They mention this computer can run Linux and FreeRTOS simultaneously!
  • 512MB RAM
  • Sweet form factor (4.2×4.2cm)
  • Peeking around their Web site, it is one of the most open and well documented computers in their hardware range.

Naturally, at close to only US$12 (plus shipping) for the configuration I wanted… I bought one, and got it delivered in early May. The little box sat on my desk for close to six weeks until I had time to start tinkering with it…

I must say I am surprised. Not only does the little bugger deliver what it promises, it is way more mature than I expected: It can be used right away without much tinkering! I mean, I have played with it for less than an hour by now, and I’ve even managed to get (almost) regular Debian working.

Milk-V distributes a simple, 58.9MB compressed Linux image based on Buildroot (a Linux image generator mostly used for embedded applications), as well as its source tree. I thought that would be a good starting point for setting up a minimal Debian filesystem, as I did with the CuBox-i4Pro ten years ago, and maybe even to grow towards a more official solution, akin to what we currently have for the Raspberry Pi family.

…Until I discovered what looks like a friendly and very active online community of Milk-V users! I haven’t yet engaged in it, but I stumbled across a thread announcing the availability of Debian images for the Milk-V family.

And yes, it feels like a very normal Debian system. /etc/apt/sources.list does point to a third-party repository, but it’s for only four packages, all related to pinmux control for CVITEK chips. It does feel like a completely normal Debian system! It is not as snappy and fast to load as Buildroot, but given Debian’s generality, that’s completely as expected. Even the wireless network, one of the usual pain points, works just out of the box! The Debian images can be built or downloaded from this Git repository.

In case you wonder how this system boots or what hardware it detects, I captured two boot logs:

June 21, 2024 05:59 AM

May 25, 2024

Gwolf

How computers make books • from graphics rendering, search algorithms, and functional programming to indexing and typesetting

This post is a review for Computing Reviews of How computers make books • from graphics rendering, search algorithms, and functional programming to indexing and typesetting, a book published by Manning.

If we look at the age-old process of creating books, how many different areas can a computer help us with? And how can each of them be used to teach computer science (CS) fundamentals to a nontechnical audience? This is the premise of John Whitington’s enticing book and the result is quite amazing.

The book immediately drew my attention when looking at the titles available for review. After all, my initiation into computing as a kid was learning the LaTeX typesetting system while my father worked on his first book on scientific language and typography [1]. Whitington picks 11 different technical aspects of book production, from how dots of ink are transferred to a white page and how they are made into controllable, recognizable shapes, all the way to forming beautiful typefaces and the nuances of properly addressing white-space to present aesthetically pleasing paragraphs, building it all into specific formats aimed at different ends.

But if we dig beyond just the chapter titles, we will find a very interesting book on CS that, without ever using technical language or notation, presents aspects as varied as anti-aliasing, vector and raster images, character sets such as ASCII and Unicode, an introduction to programming, input methods for different writing systems, efficient encoding (compression) methods, both for text and images, lossless and lossy, and recursion and dithering methods. To my absolute surprise, while the author thankfully spared the reader the syntax usually associated with LISP-related languages, the programming examples clearly stem from the LISP school, presenting solutions based on tail recursion. Of course, it is no match for Donald Knuth’s classic book on this same topic [2], but could very well be a primer for readers to approach it.

The book is light and easy to read, and keeps a very informal, nontechnical tone throughout. My only complaint relates to reading it in PDF format; the topic of this book, and the care with which the images were provided by the author, warrant high resolution. The included images are not only decorative but an integral part of the book. Maybe this is specific to my review copy, but all of the raster images were in very low resolution.

This book is quite different from what readers may usually expect, as it introduces several significant topics in the field. CS professors will enjoy it, of course, but also readers with a humanities background, students new to the field, or even those who are just interested in learning a bit more.

References

  1. Sánchez y Gándara, A.; Magariños Lamas, F.; Wolf, K. B., Manual de lenguaje y tipografía científica en castellano. Trillas, Mexico City, Mexico, 1986, https://www.fis.unam.mx/~bwolf/manual.html

  2. Knuth, D. E. Digital typography. CSLI Lecture Notes. CSLI Publications, Stanford, CA, 1999, https://www-cs-faculty.stanford.edu/~knuth/dt.html

May 25, 2024 12:11 AM

May 09, 2024

Gwolf

Hacks, leaks, and revelations • The art of analyzing hacked and leaked data

This post is a review for Computing Reviews of Hacks, leaks and revelations • The art of analyzing hacked and leaked data, a book published by No Starch Press.

Imagine you’ve come across a trove of files documenting serious wrongdoing and you feel the need to “blow the whistle.” Or maybe you are an investigative journalist and this whistleblower trusts you and wants to give you said data. Or maybe you are a technical person, trusted by said journalist to help them do things right: not only to help them avoid being exposed while leaking the information, but also to assist them in analyzing the contents of the dataset. This book will be a great aid for all of the above tasks.

The author, Micah Lee, is both a journalist and a computer security engineer. The book is written entirely from his experience handling important datasets, and is organized in a very logical and sound way. Lee organized the 14 chapters in five parts. The first part–the most vital to transmitting the book’s message, in my opinion–begins by talking about the care that must be taken when handling a sensitive dataset: how to store it, how to communicate it to others, sometimes even what to redact (exclude) so the information retains its strength but does not endanger others (or yourself). The first two chapters introduce several tools for encrypting information and keeping communication anonymous, not getting too deep into details and keeping it aimed at a mostly nontechnical audience.

Something that really sets this book apart from others like it is that Lee’s aim is not only to tell stories about the “hacks and leaks” he has worked with, or to present the technical details on how he analyzed them, but to teach readers how to do the work. From Part 2 onward the book adopts a tutorial style, teaching the reader numerous tools for obtaining and digging information out of huge and very timely datasets. Lee guides the reader through various data breaches, all of them leaked within the last five years: BlueLeaks, Oath Keepers email dumps, Heritage Foundation, Parler, Epik, and Cadence Health. He guides us through a tutorial on using the command line (mostly targeted at Linux, but considering MacOS and Windows as well), running Docker containers, learning the basics of Python, parsing and filtering structured data, writing small web applications for getting at the right bits of data, and working with structured query language (SQL) databases.

The book does an excellent job of fulfilling its very ambitious aims, and this is even more impressive given the wide range of professional profiles it is written for; that being said, I do have a couple of critiques. First, the book is ideologically loaded: the datasets all relate to the alt-right movement that has gained strength in the last decade. Lee takes the reader through many instances of COVID deniers, rioters for Donald Trump during the January 2021 attempted coup, attacks against Black Lives Matter activists, and other extremism research; thus this book could alienate right-wing researchers, who might also be involved in handling important whistleblowing cases.

Second, given the breadth of the topic and my 30-plus years of programming experience, I was very interested in the first part of each chapter but less so in the tutorial part. I suppose a journalist reading through the same text might find the sections about the importance of data handling and source protection to be similarly introductory. This is unavoidable, of course, given the nature of this work. However, while Micah Lee is an excellent example of a journalist with the appropriate technical know-how to process the types of material he presents as examples, expecting any one person to become a professional in both fields is asking too much.

All in all, this book is excellent. The writing style is informal and easy to read, the examples are engaging, and the analysis is very good. It will certainly teach you something, no matter your background, and it might very well complement your professional skills.

May 09, 2024 04:24 AM

April 25, 2024

Victor Martínez

Controversy, or maybe not?

I recently saw a controversy over something that was said somewhere, I am not sure in which outlet, but searching around I found the Índigo Geek podcast.

Which apparently is this one; since it did not let me comment, I am posting the content map here.

1:10 José Saucedo, welcome
1:51 Start; they do not introduce themselves, they will do so at minute 43 along with their contact handles
2:21 Rodrigo Chavez, marketing lead of CCXP
3:21 What CCXP is; 3:50 it does not become clear
4:05 A long time…
4:22 Four years of planning…
4:49 The Mexican public waiting for a Comic Con
5:00 “You are being pioneers…”
5:16 Previous events
6:21 Javier Ibarreche, “los urigañis del norte”…
6:21 to 8:00 Confirmed guests
8:26 Ocesa two-for-one deal
8:38 Everybody wants a Comic Con but nobody is willing to pay for it
9:50 First year,
22:46 End of segment
23:05 Game review: Outward Definitive Edition
26:22 to 42:50 Second segment, discussion
43:00 Contact handles: @indogeekmx everywhere, @accres94, @c_bits
43:43 Credits

What is perhaps controversial is the remark at minute five; the curious thing is that the credits come at the end and that, at the beginning, the one leading the show introduces himself: José Saucedo, who has long experience in the field…

One thing I keep nagging about at La Cobacha is that they should mention who is speaking and how to contact them, at least at the beginning and at the end; this episode is an example of why that is needed. I do not really know who makes up the panel besides José Saucedo and the guest Rodrigo Chavez, marketing lead of CCXP; digging a bit, it seems the person making the remark is Axel Amezquita of Reporte Indigo/IGN/Publimetro/Badgame (https://www.reporteindigo.com/author/axel-amezquita/)

I found out about it through this post:

Anyway, even going to Wikipedia to look up Editorial Vid, one can read the following:

«It is worth noting that major figures and authors from the comics and science fiction industry attended them, including Dennis O´Neil, then the editor of the Batman titles for DC Comics, and artists and writers such as Dan Jurgens (The Death of Superman), Jon Bogdanove, Louise Simonson and Todd McFarlane (creator of Spawn).»

Link to the version consulted: https://es.wikipedia.org/w/index.php?title=Grupo_Editorial_Vid&oldid=157274740

But many more examples come to mind: at CONQUE, or, reading on this very blog, Utopia2003 [1], Mole and more. And if we are talking about panels like those at Comic Con, that is not quite a first either: at Conque, at RocaPoca, at the comics fair (along with a few blunders, such as the Hades saga, when what they were actually selling circa 1998 was B’t X) [1.5] and other events there have been premieres; at the IPN there have been excellent talks [2], at Colmex, of which there is video [3], and of course this blog’s own coverage of Mecyf [4], Conque [5], and Mole 2000 [6]

[1] https://animeproject.org/2003/12/utopia-2003/

[1.5] https://animeproject.org/ap/hades.htm

[2] https://animeproject.org/2016/02/la-gran-revolucion-cultural-invisible-que-fueron-los-anos-80-por-eiji-otsuka/

[3] https://animeproject.org/2004/03/dos-mulas-japonesas-entre-burros-blancos/

[4] https://blografia.net/vicm3/1998/05/la-mecyf/

[5] https://animeproject.org/ap/conque2001.htm y https://animeproject.org/ap/conque99.htm

[6] https://animeproject.org/ap/mole2000.htm

by vicm3 at April 25, 2024 12:13 AM

April 09, 2024

Gwolf

Think outside the box • Welcome Eclipse!

Now that we are back from our six month period in Argentina, we decided to adopt a kitten, to bring more diversity into our lives. Perhaps this little girl will teach us to think outside the box!

Yesterday we witnessed a solar eclipse — Mexico City was not in the totality range (we reached ~80%), but it was a great experience to go with the kids. A couple dozen thousand people gathered for a massive picnic in las islas, the main area inside our university campus.

Afterwards, we went briefly back home, then crossed the city to fetch the little kitten. Of course, the kids were unanimous: Her name is Eclipse.

April 09, 2024 04:38 PM

March 18, 2024

Gwolf

After miniDebConf Santa Fe

Last week we held our promised miniDebConf in Santa Fe City, Santa Fe province, Argentina — just across the river from Paraná, where I have spent almost six beautiful months I will never forget.

Around 500 kilometers north of Buenos Aires, Santa Fe and Paraná are separated by the beautiful and majestic Paraná river, which flows from Brazil, marks the eastern border of Paraguay, and continues within Argentina as the heart of the litoral region of the country, until it merges with the Uruguay river (you guessed right — the river marking the eastern border of Argentina, first with Brazil and then with Uruguay), and they become the Río de la Plata.

This was a short miniDebConf: we were lent the APUL union’s building for the weekend (thank you very much!); on Saturday we had a cycle of talks, and on Sunday we followed more of a hacklab logic, with some unstructured time for everybody to work on their own projects, and to talk and have a good time together.

We were five Debian people attending: {santiago|debacle|eamanu|dererk|gwolf}@debian.org. My main contact to kickstart the organization was Martín Bayo. Martín was for many years the leader of the Technical Degree on Free Software at Universidad Nacional del Litoral, where I was also a teacher for several years. Together with Leo Martínez, also a teacher at the tecnicatura, they put us in contact with Guillermo and Gabriela, from the APUL non-teaching-staff union of said university.

We had the following set of talks (for which there is a promise of an electronic record, as APUL was kind enough to record them! Of course, I will push them to our usual conference video archiving service as soon as I get them):

Hour Title (Spanish) Title (English) Presented by
10:00-10:25 Introducción al Software Libre Introduction to Free Software Martín Bayo
10:30-10:55 Debian y su comunidad Debian and its community Emanuel Arias
11:00-11:25 ¿Por qué sigo contribuyendo a Debian después de 20 años? Why am I still contributing to Debian after 20 years? Santiago Ruano
11:30-11:55 Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué? My identity and the Debian project: What is the OpenPGP keyring and why? Gunnar Wolf
12:00-13:00 Explorando las masculinidades en el contexto del Software Libre Exploring masculinities in the context of Free Software Gora Ortiz Fuentes - José Francisco Ferro
13:00-14:30 Lunch    
14:30-14:55 Debian para el día a día Debian for our every day Leonardo Martínez
15:00-15:25 Debian en las Raspberry Pi Debian in the Raspberry Pi Gunnar Wolf
15:30-15:55 Device Trees Device Trees Lisandro Damián Nicanor Perez Meyer (videoconferencia)
16:00-16:25 Python en Debian Python in Debian Emmanuel Arias
16:30-16:55 Debian y XMPP en la medición de viento para la energía eólica Debian and XMPP for wind measuring for eolic energy Martin Borgert

As it always happens… DebConf, miniDebConf and other Debian-related activities are always fun, always productive, always a great opportunity to meet our decades-long friends again. Let’s see what comes next!

March 18, 2024 04:00 AM

March 14, 2024

Victor Martínez

Blogs in a specialized seminar

Not long ago, on a day when we had an assembly and then a demonstration, I was asked to help with a master’s class on a topic I have dealt with for quite some time; my recommendation was, in part, to take advantage of the phone’s camera to add a personal touch to their posts instead of stock images.

There was no network that day, so it was an hour and a quarter of mobile data on the phone with Teams; I was surprised by how well the students are doing (some of whom I had already heard in the research line’s seminar).

My notes for the session

My desktop

by vicm3 at March 14, 2024 11:17 PM

March 13, 2024

Victor Martínez

Why keep going

This entry already exists, photo and all; it is called “Mañana tal vez” (maybe tomorrow).

I posted it in 2016, and today it seems as important to me as it did then:

«Tomorrow I may have to sit down in front of my children and tell them that we were defeated, that we did not figure out how to win. But I could not look them in the eyes and tell them that they live like this because I did not dare to fight.»

And forgive me, but there are many things that do not get fixed just by asking nicely… there are times when one has to make people uncomfortable, to move and to move others, in order to fix what should not have been wrong in the first place.

I also thought about it recently… there are those who devote their lives to putting out fires and do not even get a thank-you, and those are the firefighters who are volunteers… and thinking about my own history, I am pretty much a firefighter and a volunteer, even if I am not at a station…

 

by vicm3 at March 13, 2024 03:23 AM

More chants

Alarm, alarm,

From east to west, from north to south, we will win this fight, whatever it takes

They said we wouldn’t, but oh yes we would: we are out on the streets again…

We are not one, we are not ten; authorities, count us properly

Sectors united will never be defeated

The aware student (teacher|worker|citizen|onlooker) joins the contingent

Now that’s support you can see!

PPSA, PPSA, EI EA, educate to transform, educate to liberate, ¡Pedagógica Nacional! (they recently changed it, but I like this one better, because we no longer have EA, Educación de Jóvenes y Adultos)

by vicm3 at March 13, 2024 03:16 AM

March 07, 2024

Gwolf

Constructed truths — truth and knowledge in a post-truth world

This post is a review for Computing Reviews of Constructed truths — truth and knowledge in a post-truth world, a book published by Springer.

Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand’s value. However, with the advent of the Internet and social media, we are witnessing what has been termed the “post-truth” phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.

Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities?

The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.

The first chapter frames the narrative from the perspective of information flow over the last several decades, on how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the “netizens” and created a temporary information flow utopia. But soon afterwards, “algorithmic gatekeepers” started appearing, creating a set of personalized distortions on reality; users started getting news aligned to what they already showed interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. This led to the growth of conspiracy theories and, necessarily, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as “fake news.” Fake news leads to “post-truth,” a state where it is impossible to distinguish between truth and falsehood, and serves only a rhetorical function, making rational discourse impossible.

Toward the end of the first chapter, the tone of writing quickly turns away from describing developments in the spread of news and facts over the last decades and goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can “truth” be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass or on their honest opinions?

Zoglauer dives into epistemology, following various thinkers’ ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality. For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really “own” knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and treatment by honestly and carefully looking up its symptoms in medical databases?

Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer’s writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.

March 07, 2024 01:08 AM

February 23, 2024

Gwolf

10 things software developers should learn about learning

This post is a review for Computing Reviews of 10 things software developers should learn about learning, an article published in Communications of the ACM.

As software developers, we understand the detailed workings of the different components of our computer systems. And, probably due to how computers were presented since their appearance as “digital brains” in the 1940s, we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and “start coding.” The article’s narrative revolves around software developers, but much of what it presents can be applied to different problem domains.

The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, “Human Memory Is Not Made of Bits,” explains the brain processes of remembering as a way of strengthening the force of a memory (“reconsolidation”) and the role of activation in related network pathways. The second section, “Human Memory Is Composed of One Limited and One Unlimited System,” goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.

Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, “The Internet Has Not Made Learning Obsolete,” emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner’s productivity as “training wheels” will often hamper the expert user’s as their knowledge has become automated.

The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like common sense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.

February 23, 2024 01:56 AM