Blografia.net
Last update: November 20, 2024 10:00 PM
November 11, 2024
Gwolf
Why academics under-share research data - A social relational theory
This post is a review for Computing Reviews of Why academics under-share research data - A social relational theory, an article published in the Journal of the Association for Information Science and Technology.
As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand it, with varying intensity: sometimes they demand research to be published in an OA venue, sometimes a mandate will only “prefer” it. Lately, some journals and funding bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid 1990s, it was natural for me to join the OA movement and to celebrate as various universities adopted such mandates.
Now, what happens after a university or funding body adopts such a mandate? Many individual academics cheer, as it is the “right thing to do.” However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or “foot-dragging” of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren’t more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores a hypothesis based on Karl Marx’s productive worker theory and Pierre Bourdieu’s ideas of symbolic capital.
The paper explains that academics, as productive workers, become targets for exploitation: driven not only by their own sharing ethos but also by private industry’s push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, an outcome that can be understood as a neoliberal productivity-measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.
The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.
November 09, 2024
Victor Martínez
Eureka
As the title says, I found it, but I am getting ahead of myself. As I wrote earlier, the Blografia aggregator stopped working, and although I thought of it as a weekend project, what I actually ended up giving it was a Saturday afternoon, several nights, and a Thursday morning…
With the precedent of Planetalinux [1], which David [2] wrote inspired by PlanetVenus [3], or rather to work around the latter's limitations, I went looking for that one, which had to be less old than the PlanetPlanet [4] I was running and on which both are based… Yes and no: it is newer, but it doesn't work with Python 3 either, at least not without major changes.
While searching I thought about who else might still be using PlanetPlanet, and of course Debian is one of them; it isn't packaged, and the code isn't on Salsa. I started to check whether it could be converted from Python 2 to 3, and it turns out it can: there is 2to3, which I started experimenting with, but I was still missing a few things and hitting errors I didn't recognize. Once into this, I wondered about other planets and found the abandoned Wikipedia page [5].
Looking around, there is a PHP version and a Ruby one out there; the first didn't work for me and the second I didn't try. I was at that point when I came across PlanetPython [6], and checking its repository on GitHub [7] it turns out it uses PlanetPlanet and has a Python 3 branch (py3), which I assume is the one that generates their page. I cloned the repository, switched to the py3 branch, and found that the template markup had changed; I took the one they use by default, only had to change one value in my configuration, deleted my whole .local to see whether it worked in my environment, and needed to install feedparser with pip. Finally I had PlanetPlanet running again under Python 3. Surely, after another couple of weeks of banging on the errors I still had to resolve, I would have ended up with more or less the same thing; reading the repository logs, they followed more or less the same path back then: first 2to3, then reviewing dependencies and some syntax. So I am going to look into a few issues I couldn't resolve at the time, to see how they did it.
And well, I hope to have the aggregator around for a while. I suppose I could make my own fork, but I haven't really made any changes to the code, and I created my own configuration section that doesn't need to be under version control, at least not a public one. So we're back.
[1] https://damog.net/blog/2019/01/12/adi%C3%B3s-sitio-de-planeta-linux/
[2] https://damog.net/blog/
[3] https://www.intertwingly.net/code/venus/
[4] https://web.archive.org/web/20170324212158/http://www.planetplanet.org/
[5] https://en.wikipedia.org/wiki/Planet_(software)
[6] https://planetpython.org/
[7] https://github.com/python/planet
October 31, 2024
Gwolf
Do you have a minute..?
…to talk about the so-called “Intellectual Property”?
October 30, 2024
Victor Martínez
On readers… of RSS
There was a time when we all used the "add RSS" button and blogs reigned, back around 2003, I suppose. I know there were blogs long before that, but I'm not quite sure whether the boom, at least among the Spanish-speaking ones, was in 2003 or 2004. I read that Google Reader launched in 2005; I don't remember when I started using it, but I liked it a lot, and I was very sad and upset when it was discontinued in 2013, like so many other products that the California company has discontinued with no alternative, without releasing the code, or that it outright bought in order to shut down. Anyway, for a while I used a Windows application that didn't have all the functionality of Google's.
Around 2009(?) I had the big PSU failure and moved to Debian on my home computer, so I had to find a replacement for the reader. I tried more than a couple, but the one I managed to install and keep on my hosting service was Tiny Tiny RSS [1], which I have been running for a good while now. I created an account for Nodens and another for Map, though I wouldn't be surprised if they have abandoned it too. Now that our aggregator broke, I took the chance to update the version in my reader with git and give it a look; I'm glad it works very well and keeps my reading list. What saddens me is seeing, in all the entries marked in red, the blogs and news sites that disappeared, stopped publishing, went paywalled, went broke, were abandoned, or whose creators are no longer with us. Anyway, Gunnar has kindly offered me an option he found on PlanetDebian [2], and as far as I remember that one runs on PlanetVenus [3], which is a fork of the PlanetPlanet that produces Blografia's front page. I don't doubt it runs on an oldstable/security/LTS instance where the Python 2.x dependencies aren't broken, or in its own jail or VM; I'm going to ask, because looking at Salsa [4] I don't see that they have modified the code, or I didn't figure out where to look.
Anyway, it has been a dive into nostalgia, and it has been interesting to read several entries from blogs that have disappeared from the net entirely.
With PlanetVenus I have half managed to make some progress on its dependencies without having to rewrite it, by switching to the dev branch, which is only 7 years behind, instead of the main one, which hasn't moved in 14 years… I told Gunnar I would devote the weekend to it, and in reality I've only spent a couple of hours; I hope to give it a weekend afternoon. If it doesn't work out, there is his suggestion, FreshRSS [5]; I don't entirely like it, but it offers a public front-page option that tt-rss doesn't have, or wasn't designed for, since it generates a random URL, which I suppose a symlink could solve… unless along the way I find a lighter program that does the same…
[1] https://tt-rss.org/
[2] https://planet.debian.org/
[3] https://github.com/rubys/venus
[4] https://salsa.debian.org/planet-team
[5] https://freshrss.org/index.html
October 22, 2024
Victor Martínez
And our aggregator
It stopped working, just when I was about to write about something else. I noticed the front page wasn't updating, and it turns out I'm using PlanetPlanet with some modifications from PlanetVenus, which is a fork, with a view to migrating to the latter. Relatively recently DH migrated to the latest Ubuntu LTS… and updated Python, leaving both programs I was using unsupported. I've spent about 6 hours over two days trying to get either of the two working with Python 3, and I'm more or less getting somewhere with PlanetVenus… but it's not quite there… Not long ago Gunnar wrote that his updates would be appearing here; in fact that's how I noticed it was broken, because I saw an entry on his blog that wasn't on the front page. It looks like I won't be able to devote time to it until the weekend.
October 11, 2024
Victor Martínez
This morning
Squirrels, donkeys, pumas, and gray wolves.
October 10, 2024
Gwolf
Started a guide to writing FUSE filesystems in Python
As DebConf22 was coming to an end in Kosovo, talking with Eeveelweezel, they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I’m not really that much of a Python guy… But I would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea. And the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it’s the language I find my students can best cope with. But delivering a talk to ChiPy?
On the other hand, I have long used a very simplistic and limited filesystem I’ve designed as an implementation project at class: FIUnamFS (for “Facultad de Ingeniería, Universidad Nacional Autónoma de México”: the Engineering Faculty for Mexico’s National University, where I teach. Sorry, the link is in Spanish — but you will find several implementations of it from the students 😉). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk.
As I give this filesystem as a project to my students (and not as a mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: Use FUSE.
But, in the six semesters I’ve used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation.
Maybe this is because it’s not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I’ve seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros’ passthrough filesystem, Dave Filesystem based upon, and further explaining, Stavros’, and several others) explaining how to provide basic functionality. I found a particularly useful presentation by Matteo Bertozzi presented ~15 years ago at PyCon4… But none of those is IMO followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?).
And of course, there isn’t a single interface to work from. In Python only, we can find python-fuse, Pyfuse, Fusepy… Where to start from?
…So I set out to try and help.
Over the past couple of weeks, I have been slowly working on my own version, and presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but… maybe my documentation ends up obfuscating the intent? I hope not — and, read on, I’ve provided some remediation).
I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to also add filesystems backed by pseudo-block-devices (for implementations such as my FIUnamFS).
So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
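To give a rough idea of what the first steps of such a guide deal with, here is a minimal sketch of my own (it is not taken from the guide, and it assumes the fusepy bindings rather than python-fuse; the HelloFS class and its /hello file are invented for illustration): a read-only filesystem exposing a single in-memory file.
#!/usr/bin/env python3
# Minimal read-only FUSE filesystem: one in-memory file, using the fusepy bindings
import errno
import stat
import sys
from fuse import FUSE, FuseOSError, Operations
CONTENT = b'Hello from a toy FUSE filesystem!\n'
class HelloFS(Operations):
    def getattr(self, path, fh=None):
        # The root is a directory; /hello is a read-only regular file
        if path == '/':
            return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
        if path == '/hello':
            return {'st_mode': stat.S_IFREG | 0o444, 'st_nlink': 1,
                    'st_size': len(CONTENT)}
        raise FuseOSError(errno.ENOENT)
    def readdir(self, path, fh):
        # Only the root directory exists, and it holds a single entry
        return ['.', '..', 'hello']
    def read(self, path, size, offset, fh):
        # Serve the requested slice of the in-memory contents
        if path != '/hello':
            raise FuseOSError(errno.ENOENT)
        return CONTENT[offset:offset + size]
if __name__ == '__main__':
    # Usage: ./hellofs.py /some/empty/mountpoint
    FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)
Mounting it on an empty directory and reading the hello file under the mountpoint should be enough to see the whole round trip through the kernel.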
I think providing fun or useful examples is also a good way to get students to use what I’m teaching, so I’ve added some ideas I’ve had: DNS Filesystem, on-the-fly markdown compiling filesystem, unzip filesystem and uncomment filesystem.
They all provide something that could be seen as useful, in a way that’s easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! 😉
So… I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk virtually at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week (virtually as well). And also in November, in person, at nerdear.la, that will be held in Mexico City for the first time.
Of course, I will also share this project with my students in the next couple of weeks… And hope it manages to lure them into implementing FUSE in Python. At some point, I shall report!
Update: After delivering my ChiPy talk, I have uploaded it to YouTube: A hand-holding guide to writing FUSE-based filesystems in Python, and after presenting at Jornadas Regionales, I present you the video in Spanish here: Aprendiendo y enseñando a escribir sistemas de archivo en espacio de usuario con FUSE y Python.
September 21, 2024
Gwolf
50 years of queries
This post is a review for Computing Reviews of 50 years of queries, an article published in Communications of the ACM.
The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (that evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language.
The article begins by giving background on information processing before the advent of today’s database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd’s relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that “starts the clock” on 50 years of relational databases.
The second part of the article tells about the growth of the structured English query language (SEQUEL), eventually renamed SQL, including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today, such as Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite.
The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a “winning concept,” as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s.
This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap on half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.
September 02, 2024
Gwolf
Free and open source software and other market failures
This post is a review for Computing Reviews of Free and open source software and other market failures, an article published in Communications of the ACM.
Understanding the free and open-source software (FOSS) movement has, since its beginning, implied crossing many disciplinary boundaries. This article describes FOSS’s history, explaining its undeniable success throughout the 1990s, and why the movement today feels in a way as if it were on autopilot, lacking the “steam” it once had.
The author presents several examples from different industries where, as happened with FOSS in computing, fundamental innovations came about not because the leading companies in each field were attentive to customers’ needs but, to a certain degree, despite them not even considering those needs, typically due to the hubris that comes from being a market leader.
Kamp exemplifies his hypothesis by presenting the messy landscape of the commercial, mutually incompatible systems of Unix in the 1980s. Different companies had set out to implement their particular flavor of “open Unix computers,” but with clear examples of vendor lock-in techniques. He speculates that, “if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs … nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby.” He states that the FOSS movement was born out of the utter market failure of the different Unix vendors.
The focus of the article shifts then to the FOSS movement itself: 25 years ago, as FOSS systems slowly gained acceptance and then adoption in the “serious market” and at the center of the dot-com boom of the early 2000s, Linux user groups (LUGs) with tens of thousands of members bloomed throughout the world; knowing this history, why have all but a few of them vanished into oblivion?
Kamp suggests that the strength and vitality that LUGs had ultimately reflected the anger that prompted technical users to take the situation into their own hands and fix it; once the software industry was forced to change, the strongly cohesive FOSS movement diluted. “The frustrations and anger of [information technology, IT] in 2024,” Kamp writes, “are entirely different from those of 1991.” As an example, the author closes by citing the difficulty of maintaining–despite having the resources to do so–an aging legacy codebase that needs to continue working year after year.
August 18, 2024
Gwolf
The social media my blog –as well as some other sites I publish in– is pushed to will soon stop receiving updates
For many years, I have been using the dlvr.it service to echo my online activity to where more people can follow it. Namely, I write in the following sources:
- My blog (where this content is being posted to) → RSS
- Mostly academic publications I send to my university’s repository (including conference presentations and the like) → RSS
- Videos posted to my YouTube channel (mostly my classes but some other material as well) → RSS
Via dlvr.it’s services, all those posts are “echoed” to Gwolfwolf in X (Twitter) and to the Gunnarwolfi page in Facebook. I use neither platform as a human (that is, I never log in there).
Anyway, dlvr.it sent me a mail stating they would soon (as in, within the next few weeks) be cutting their free tier. And, although I value their services and am thankful for what they have provided so far, I am not going to pay for my personal stuff to be reposted to social media.
So, this post’s mission is twofold:
- If you follow me via any of those media, you will soon not be following me anymore 😉
- If you know of any service that would fill the space left by dlvr.it, I will be very grateful. Extra gratefulness points if the option you suggest is able to post to accounts in less-proprietary media (i.e. the Fediverse). Please tell me by mail (gwolf@gwolf.org).
Oh! Forgot to mention: Of course, my blog will continue to appear in Planet Debian, Blografía, and any decent aggregator that consumes my RSS.
August 01, 2024
Victor Martínez
Recovering Flash teaching materials: Aprendamos Náhuatl
Well, this year we participated a bit less in CHAT 2024 [1], and I still haven't finished watching the talks that interest me [2]. Partly to promote the event, I have a couple of invitations at UPN to explain a bit about the use of AI in education, and although I have colleagues who have written and researched more on the subject, it seems this year's topic has really caught people's attention, even if it isn't what I presented.
What I did do was go back over the project I worked on last year, recovering a piece of courseware developed in Flash. When I first talked about it, I thought it was an even older one that ran on Authorware and seemed simple to get running; it turned out to be one I didn't know, which ran on the web, and so from April to October 2023 I did what is shown in the video and the presentation [3].
In short, Adobe announced the abandonment of Flash as a technology well in advance. In our case, recovering the content came first; since we didn't have the source, a Flash SWF to Animate HTML5 migration wasn't possible. We found a free and modern option, Ruffle, which solves the problems that led Adobe to abandon Flash, but creates some new ones by not implementing EVERYTHING the original product did, especially its design errors and several insecure practices.
This is not a how-to guide, but using Ruffle, much of the educational software in which time has already been invested, and for which the sources or the resources to reimplement it are not available, can be recovered, whether as a historical resource or to extend its use, and to some extent modified and brought up to date, with quite a bit of work.
Ah, and the interactive itself, for your reference [4].
[1] https://chat.iztacala.unam.mx/elchat/6
[2] https://www.youtube.com/@CHATIztacala/streams
[3] http://blografia.net/vicm3/wp-content/uploads/2024/07/Recuperacion-de-material-didactico-en-flash.pdf
[4] http://linux.ajusco.upn.mx/~vicm3/nahuatl/
July 24, 2024
Gwolf
DebConf24 Continuous Key-Signing Party
🎉🥳🤡🎂🍥 Yay, party! 🎉🥳🤡🎂🍥
🎉🥳🤡🎂🍥 Yay, crypto! 🎉🥳🤡🎂🍥
DebCamp has started, and in a couple of days, we will fully be in DebConf24 mode!
As most of you know, an important part that binds Debian together is our cryptographic identity assurance, and that is in good measure tightened by the Continuous Key-Signing Parties we hold at DebConfs and other Debian and Free Software gatherings.
As I have done during (most of) the past DebConfs, I have prepared a set of pseudo-social maps to help you find where you are in the OpenPGP mesh of our conference. Naturally, Web-of-Trust maps should be user-centered, so find your own at:
https://people.debian.org/~gwolf/dc24_ksp/
The list is now final and it will not receive any modifications (I asked for them some days ago); if your name still appears on the list and you don’t want to be linked to the DC24 KSP in the future, tell me and I’ll remove it from future versions of the list (but it is part of the final DC24 file, as its checksum is already final).
Speaking of which!
If you are to be a part of the keysigning, get the final DC24 file and, on a device you trust, check its SHA256 by running:
$ sha256sum dc24_fprs.txt
11daadc0e435cb32f734307b091905d4236cdf82e3b84f43cde80ef1816370a5 dc24_fprs.txt
Make sure the resulting number matches the one I’m presenting. If it doesn’t, ensure your copy of the file is not corrupted (i.e. download again). If it still does not match, notify me immediately.
Does any of the above confuse you? Please come to (or at least, follow the stream for) my session on DebConf opening day, Continuous Key-Signing Party introduction, 10:30 Korean time; I will do my best to explain the details to you.
PS- I will soon provide a simple, short PDF that will probably be mass-printed at FrontDesk so that you can easily track your KSP progress.
July 17, 2024
Gwolf
Script for weather reporting in Waybar
While I was living in Argentina, we (my family) found ourselves checking weather forecasts almost constantly — weather there can be quite unexpected, much more so than here in Mexico. So it took me a bit of tinkering to come up with a couple of simple scripts to show the weather forecast as part of my Waybar setup. I haven’t cared to share them with anybody, as I believe them to be quite trivial and quite dirty.
But today, Víctor was asking for some slightly-related things, so here I go. Please do remember I warned: Dirty.
I am using OpenWeather’s open API. I had to register to get an APPID, and it allows me up to 1,000 API calls per day, more than plenty for my uses, even if I am logged in to my desktop at three different computers (not an uncommon situation). Having that, I set up a file named /etc/get_weather, that currently reads:
# Home, Mexico City
LAT=19.3364
LONG=-99.1819
# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317
# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055
APPID=SomeLongRandomStringIAmNotSharing
Then, I have a simple script, /usr/local/bin/get_weather, that fetches the current weather and the forecast, and stores them as /run/weather.json and /run/forecast.json:
#!/usr/bin/bash
CONF_FILE=/etc/get_weather
if [ -e "$CONF_FILE" ]; then
. "$CONF_FILE"
else
echo "Configuration file $CONF_FILE not found"
exit 1
fi
if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
echo "Configuration file must declare latitude (LAT), longitude (LONG) "
echo "and app ID (APPID)."
exit 1
fi
CURRENT=/run/weather.json
FORECAST=/run/forecast.json
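# Fetch current conditions and the 5-day/3-hour forecast from OpenWeather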
wget -q "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${CURRENT}"
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=${LAT}&lon=${LONG}&units=metric&appid=${APPID}" -O "${FORECAST}"
This script is called by the corresponding systemd service unit, found at /etc/systemd/system/get_weather.service:
[Unit]
Description=Get the current weather
[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather
And it is run every 15 minutes via the following systemd timer unit, /etc/systemd/system/get_weather.timer:
[Unit]
Description=Get the current weather every 15 minutes
[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service
[Install]
WantedBy=multi-user.target
(yes, it runs even if I’m not logged in, wasting some of my free API calls… but within reason)
Then, I declare a "custom/weather" module in the desired position of my ~/.config/waybar/waybar.config, and define it as:
"custom/weather": {
"exec": "while true;do /home/gwolf/bin/parse_weather.rb;sleep 10; done",
"return-type": "json",
},
This script basically morphs a generic weather JSON description into another set of JSON bits that display my weather the way I prefer to have it displayed:
#!/usr/bin/ruby
require 'json'
Sources = {:weather => '/run/weather.json',
:forecast => '/run/forecast.json'
}
Icons = {'01d' => '🌞', # d → day
'01n' => '🌃', # n → night
'02d' => '🌤️',
'02n' => '🌥',
'03d' => '☁️',
'03n' => '🌤',
'04d' => '☁️',
'04n' => '🌤',
'09d' => '🌧️',
'10n' => '🌧 ',
'10d' => '🌦️',
'13d' => '❄️',
'50d' => '🌫️'
}
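# Waybar custom modules with "return-type": "json" expect an object with text, tooltip, class and percentage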
ret = {'text' => nil, 'tooltip' => nil, 'class' => 'weather', 'percentage' => 100}
# Current weather report: Main text of the module
begin
weather = JSON.parse(open(Sources[:weather],'r').read)
loc_name = weather['name']
icon = Icons[weather['weather'][0]['icon']] || '?' + weather['weather'][0]['icon'] + weather['weather'][0]['main']
temp = weather['main']['temp']
sens = weather['main']['feels_like']
hum = weather['main']['humidity']
wind_vel = weather['wind']['speed']
wind_dir = weather['wind']['deg']
portions = {}
portions[:loc] = loc_name
portions[:temp] = '%s 🌡%2.2f°C (%2.2f)' % [icon, temp, sens]
portions[:hum] = '💧 %2d%%' % hum
portions[:wind] = '🌬%2.2fm/s %d°' % [wind_vel, wind_dir]
ret['text'] = [:loc, :temp, :hum, :wind].map {|p| portions[p]}.join(' ')
rescue => err
ret['text'] = 'Could not process weather file (%s ⇒ %s: %s)' % [Sources[:weather], err.class, err.to_s]
end
# Weather prevision for the following hours/days
begin
cast = []
forecast = JSON.parse(open(Sources[:forecast], 'r').read)
min = ''
max = ''
day=Time.now.strftime('%Y.%m.%d')
by_day = {}
forecast['list'].each_with_index do |f,i|
by_day[day] ||= []
time = Time.at(f['dt'])
time_lbl = '%02d:%02d' % [time.hour, time.min]
icon = Icons[f['weather'][0]['icon']] || '?' + f['weather'][0]['icon'] + f['weather'][0]['main']
by_day[day] << f['main']['temp']
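# On crossing into a new day, emit the min/max accumulated for the day that just ended, then a header for the new day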
if time.hour == 0
min = '%2.2f' % by_day[day].min
max = '%2.2f' % by_day[day].max
cast << ' ↑ min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
day = time.strftime('%Y.%m.%d')
cast << ' ┍━━━━━┫ <b>%04d.%02d.%02d</b> ┠━━━━━┑' %
[time.year, time.month, time.day]
end
cast << '%s | %2.2f°C | 🌢%2d%% | %s %s' % [time_lbl,
f['main']['temp'],
f['main']['humidity'],
icon,
f['weather'][0]['description']
]
end
cast << ' ↑ min: <b>%s</b>°C max: <b>%s°C</b>' % [min, max]
ret['tooltip'] = cast.join("\n")
rescue => err
ret['tooltip'] = 'Could not process forecast file (%s ⇒ %s: %s)' % [Sources[:forecast], err.class, err.to_s]
end
# Print out the result for Waybar to process
puts ret.to_json
The end result? Nothing too stunning, but definitively something I find useful and even nicely laid out:
Do note that it seems OpenWeather will return the name of the closest available meteorology station with (most?) recent data — for my home, I often get Ciudad Universitaria, but sometimes Coyoacán or even San Ángel Inn.
Victor Martínez
Redshift
Some time ago, probably around 2019, I saw on my Android phone the option to change the color temperature, to help the eyes adjust to blue light by shifting toward a red tone, and possibly to help avoid disturbing the sleep cycle. Surely this has long been implemented in the Windows and Mac world as well. At the time I looked in Debian and found Redshift [1] and Redshift-gtk [2], which do what those various applications do. I especially liked that they take the location data from the network through a service called Geoclue [3], which uses the Mozilla Foundation's MLS service [4]. This year it started to fail, and in June it stopped working altogether; checking the package's bugs I found similar, though not identical, reports, and I didn't know how to write mine up properly, so I'm posting it here as an entry to see if some Debianite helps me give it the correct format for a bug report.
Redshift supports setting the location manually; the documentation explains how, and in very small print it notes that longitudes in the Americas and latitudes in the southern cone are negative. This matters because, in searches, a couple of search engines report the absolute value, if you don't open the full document and read carefully.
What happened?
It turns out that MLS had been gradually abandoned, and in March of this year it stopped answering requests [5]; then Geoclue-2.0 stopped working, and finally so did Redshift.
The solution is to set the latitude and longitude manually. The example in the documentation uses Copenhagen and is quite good; the man page says:
Your current location, in degrees, given as floating point numbers, towards north and east, with negative numbers representing south and west, respectively.
Or, as the website puts it, in a way friendlier to anyone in a hurry [6]:
When you specify a location manually, note that a location south of equator has a negative latitude and a location west of Greenwich (e.g the Americas) has a negative longitude.
So, as I read it, it will be hard for Redshift to get updated, since it hasn't had major changes since 2018, but one can fix the issue, as in my case, by creating a file at .config/redshift.conf with the correct data. For Mexico City we get 19.4326° N, 99.1332° W, that is, 19.43 and -99.13 in decimal; if you enter them as plain positive numbers you end up not in Mexico City but somewhere in Thailand, with the clock flipped, which left me three days with the screen in a very blue tone well into the night.
Or, as I was doing it by hand, back when I didn't understand that I was missing the negative signs; for example, if you are travelling you can do it this way (I know you could create several configurations for the cities you are going to visit, but that's more bother than I care for):
redshift -O 3900 # night / one time
redshift -x # reset
redshift -l 55.7:12.6 -t 5700:3600 -g 0.8 -m randr -v # Example for Copenhagen, Denmark, from man
redshift -l 19.43:-99.13 -t 6700:3800 -m randr -v # Mexico City, note the minus sign
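For reference, a minimal sketch of what that .config/redshift.conf could contain for Mexico City, assuming the manual location provider described in redshift's documentation (adjust lat/lon for your own location):
[redshift]
location-provider=manual
[manual]
; West of Greenwich, hence the negative longitude
lat=19.43
lon=-99.13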
So, although I suppose the best thing would be to file the ticket against redshift or redshift-gtk and tag it +packages or +depends geoclue-2.0, since I don't remember exactly how that goes, and in English I don't think it would come out as well as this entry, I prefer to put it here while I see whether I can condense it into English.
[1] http://jonls.dk/redshift/
[2] https://packages.debian.org/bookworm/redshift
[3] https://gitlab.freedesktop.org/geoclue/geoclue/wikis/home
[4] https://en.wikipedia.org/wiki/Mozilla_Location_Service
[5] https://discourse.mozilla.org/t/retiring-the-mozilla-location-service/128693
[6] http://jonls.dk/redshift/
Gwolf
Scholarly spam • «Wulfenia»
I just got one of those utterly funny spam messages… And yes, I recognize everybody likes building a name for themselves. But some spammers are downright silly.
I just got the following mail:
From: Hermine Wolf <hwolf850@gmail.com>
To: me, obviously 😉
Date: Mon, 15 Jul 2024 22:18:58 -0700
Subject: Make sure that your manuscript gets indexed and showcased in the prestigious Scopus database soon.
Message-ID: <CAEZZb3XCXSc_YOeR7KtnoSK4i3OhD=FH7u+A5xSMsYvhQZojQA@mail.gmail.com>
This message has visual elements included. If they don't display, please
update your email preferences.
*Dear Esteemed Author,*
Upon careful examination of your recent research articles available online,
we are excited to invite you to submit your latest work to our esteemed
journal, '*WULFENIA*'. Renowned for upholding high standards of excellence
in peer-reviewed academic research spanning various fields, our journal is
committed to promoting innovative ideas and driving advancements in
theoretical and applied sciences, engineering, natural sciences, and social
sciences. 'WULFENIA' takes pride in its impressive 5-year impact factor of
*1.000* and is highly respected in prestigious databases including the
Science Citation Index Expanded (ISI Thomson Reuters), Index Copernicus,
Elsevier BIOBASE, and BIOSIS Previews.
*Wulfenia submission page:*
[image: research--check.png][image: scrutiny-table-chat.png][image:
exchange-check.png][image: interaction.png]
.
Please don't reply to this email
We sincerely value your consideration of 'WULFENIA' as a platform to
present your scholarly work. We eagerly anticipate receiving your valuable
contributions.
*Best regards,*
Professor Dr. Vienna S. Franz
Who cares what Wulfenia is about? It’s about you, my stupid Wolf cousin!
June 26, 2024
Gwolf
Many terabytes for students to play with. Thanks Debian!
My students at LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Lab) at Facultad de Ingeniería, UNAM asked me to help them get the needed hardware to set up a mirror for various free software projects. We have some decent servers (not new servers, but mirrors don’t need to be top-performance), so…
A couple of weeks ago, I approached the Debian Project Leader (DPL) and suggested we should buy two 16 TB hard drives for this project, as that is the most reasonable cost-per-byte point I found. He agreed, and I bought the drives. Today we had a lab meeting, and I handed the hardware over to them.
Of course, they are very happy and thankful to the Debian project 😃 In the last couple of weeks, they have already set up an Archlinux mirror (https://archlinux.org/mirrors/fi-b.unam.mx), and now that they have heaps of storage space, plans are underway to set up various other mirrors (of course, a Debian mirror will be among the first).
June 25, 2024
Gwolf
Find my device - Whether you like it or not
I received a mail today from Google (noreply-findmydevice@google.com) notifying me that they would unconditionally enable the Find my device functionality I have been repeatedly marking as unwanted in my Android phone.
The mail goes on to explain this functionality works even when the device is disconnected, by Bluetooth signals (aha, so “turn off Bluetooth” will no longer turn off Bluetooth? Hmmm…)
Of course, the mail hand-waves that only I can know the location of my device. «Google cannot see or use it for other ends». First, should we trust this blanket statement? Second, the fact they don’t do it now… means they won’t ever? Not even if law enforcement requires them to? The devices will be generating this information whether we want it or not, so… it’s just a matter of opening the required window.
Of course, it is a feature many people will appreciate and find useful. And it’s not only for finding lost (or stolen) phones; the mail also mentions tags can be made available to put in your wallet, bike, keys or whatever. But it should be opt-in. As it is, it seems it’s not even possible to opt out of it.
April 25, 2024
Victor Martínez
Controversy, or maybe not?
I recently saw a controversy over something that was said, I'm not sure in which outlet, but searching around I found the Índigo Geek podcast.
Which apparently is this one; since it wouldn't let me comment, I'm posting the content map here.
1:10 José Saucedo, welcome
1:51 Start; they don't introduce themselves, they will do so at minute 43 with their contact handles
2:21 Rodrigo Chavez, CCXP's marketing lead
3:21 What CCXP is; 3:50 it isn't made clear
4:05 A long time…
4:22 Four years of planning…
4:49 The Mexican public waiting for a Comic-Con
5:00 You are being pioneers…
5:16 Previous events
6:21 Javier Ibarreche, los urigañis del norte…
6:21 to 8:00 Confirmed guests
8:26 OCESA two-for-one
8:38 Everybody wants a Comic-Con but nobody is willing to pay for it
9:50 First year,
22:46 End of segment
23:05 Game review: Outward Definitive Edition
26:22 to 42:50 Second segment, discussion
43:00 Contact handles: @indogeekmx everywhere, @accres94, @c_bits
43:43 Credits
What is perhaps controversial is the remark at minute five. The most curious thing is that the credits come at the end, while at the beginning the one leading the show, José Saucedo, who has long experience in the field, introduces himself…
One thing I keep nagging people about at La Cobacha is that they should say who is speaking and how to contact them, at least at the beginning and at the end; this episode is an example of why that needs to be done. I don't really know who makes up the panel other than José Saucedo and the guest Rodrigo Chavez, CCXP's marketing lead; digging around, it seems the person who makes the remark is Axel Amezquita of Reporte Indigo/IGN/Publimetro/Badgame (https://www.reporteindigo.com/author/axel-amezquita/).
I found out through this post:
I don't know who did the CCXP interview for Índigo Geek, but Stan Lee came to Mexico in 1996 and 2017. I don't know where they get the idea that this is the first time there is international talent at a convention
— G.D.E: GREAT DOCTOR EDOZUKA (@Edo_Granpa) April 23, 2024
In any case, even going to Wikipedia to look up Editorial Vid, one can read the following:
«It is worth noting that major figures and authors from the comics and science fiction industries attended them, including Dennis O'Neil, then the editor of the Batman titles for DC Comics, and artists and writers such as Dan Jurgens (The Death of Superman), Jon Bogdanove, Louise Simonson, and Todd McFarlane (creator of Spawn).»
Link to the version consulted: https://es.wikipedia.org/w/index.php?title=Grupo_Editorial_Vid&oldid=157274740
But I can think of many more: at CONQUE, or reading this very blog about Utopia 2003 [1], La Mole and more. And if we mean panels like Comic-Con's, that isn't quite true either: at Conque, at RocaPoca, at the comics fair (along with some blunders, like the Hades saga, when what they were selling was actually BTx, circa 1998) [1.5] and other events there have been firsts; there have been excellent talks at IPN [2], and one at Colmex of which there is video [3], and of course my own write-ups of the Mecyf [4], Conque [5], and La Mole 2000 [6].
[1] https://animeproject.org/2003/12/utopia-2003/
[1.5] https://animeproject.org/ap/hades.htm
[2] https://animeproject.org/2016/02/la-gran-revolucion-cultural-invisible-que-fueron-los-anos-80-por-eiji-otsuka/
[3] https://animeproject.org/2004/03/dos-mulas-japonesas-entre-burros-blancos/
[4] https://blografia.net/vicm3/1998/05/la-mecyf/
[5] https://animeproject.org/ap/conque2001.htm y https://animeproject.org/ap/conque99.htm
[6] https://animeproject.org/ap/mole2000.htm