"id","url","name","score","phase","progress","summary","hashtag","image_url","source_url","webpage_url","autotext_url","download_url","contact_url","logo_color","logo_icon","excerpt","created_at","updated_at","team","maintainer","event_url","event_name","autotext","longtext" "8","https://frictionless.dribdat.cc/project/8","Data package manager for CKAN (dpckan)","71","Launching","30","dpckan is a tool for initial publication and incremental updates of datasets described with Frictionless Standards in a CKAN instance","","https://avatars.githubusercontent.com/u/65427309?v=4","https://github.com/dados-mg/dpckan","","https://github.com/dados-mg/dpckan","https://github.com/dados-mg/dpckan/releases","https://github.com/dados-mg/dpckan/issues","","","dpckan is being used to manage the Open Data Portal of the State of Minas Gerais/Brazil (https://dados.mg.gov.br/). The goal of the hackathon project is to further develop dpckan to allow for - Dry run updates to show diff of changed resource properties - Smart sync between data package and CKAN dataset (ie. automatically determine resources that should be updated) - Loading of data in datastore respecting table schema information The bulk of the work will make use of frictionless.py,...","2021-09-07T20:24","2021-10-08T18:56","andrelamor, fjuniorr, GabrielBraicoDornas, marcelapires, avdata99, carolvettor, Daniel","fjuniorr","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","# Data package manager para CKAN (dpckan) O `dpckan` é um pacote Python, acessível via interface [CLI](https://pt.wikipedia.org/wiki/Interface_de_linha_de_comandos), utilizado para criação e atualização de conjuntos de dados e recursos (documentados de acordo com o padrão de metadados [Frictionless Data](https://frictionlessdata.io/)) em uma instância do [CKAN](https://ckan.org/). Curiosidades: Consulte a comparação do dpckan com alguns [projetos relacionados](RELATED_PROJECTS.md). [Documentação complementar](https://dpckan.readthedocs.io/en/latest/) ## Instalação O `dpckan` está disponível no Python Package Index - [PyPI](https://pypi.org/project/dpckan/) e pode ser instalado utilizando-se o comando abaixo: ```bash # Antes de executar o comando abaixo lembre-se que ambiente Python deverá estar ativo $ pip install dpckan ``` ## Configuração de Variáveis de ambiente Todos os comandos exigem a indicação de uma instância CKAN (ex: https://demo.ckan.org/) e de uma chave válida para autenticação na referida instância. Esta indicação deverá ser realizada através do cadastro de variáveis de ambiente. Para invocação CLI de qualquer comando sem a necessidade de indicar explicitamente estas variáveis recomenda-se utilização dos nomes `CKAN_HOST` e `CKAN_KEY` para cadastro de instância e chave respectivamente. Caso outros nomes sejam utilizados, necessário indicar explicitamente durante a chamada da função desejada, utilizando-se as flags ""--ckan-host"" e ""--ckan-key"", conforme demostrado abaixo e ou de maneira mais detalhada na sessão [Uso](#uso). ```bash # CKAN_HOST=https://demo.ckan.org/ # CKAN_KEY=CC850181-6ZS9-4f4C-bf3f-fb4db7ce09f90 (Chave CKAN meramente ilustrativa) # Utilização sem necessidade de indicar explicitamente variáveis $ dpckan dataset create # CKAN_HOST_PRODUCAO=https://demo.ckan.org/ # CKAN_KEY_PRODUCAO=CC850181-6ZS9-4f4C-bf3f-fb4db7ce09f90 (Chave CKAN meramente ilustrativa) # Utilização indicando explicitamente variáveis, através flags --ckan-host e --ckan-key $ dpckan dataset create --ckan-host $CKAN_HOST_PRODUCAO --ckan-key $CKAN_KEY_PRODUCAO ``` O cadastro das variáveis de ambiente `CKAN_HOST` e `CKAN_KEY`, necessárias para invocação de cada comando, deverá ser realizada conforme sistema operacional do usuário. Abaixo links de referência para tal: * [Windows](https://professor-falken.com/pt/windows/como-configurar-la-ruta-y-las-variables-de-entorno-en-windows-10/) * [Linux](https://ricardo-reis.medium.com/vari%C3%A1veis-de-ambiente-no-linux-debian-f677d6ca94c) * [Mac](https://support.apple.com/pt-br/guide/terminal/apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac) Alternativamente, o cadastro destas variáveis de ambiente poderá ser realizado em arquivo "".env"", na raiz do conjunto de dados, sendo necessário a inclusão deste "".env"" em arquivo "".gitignore"", evitando assim a sincronização e consequente publicização destas chaves em repositórios online como [github](https://github.com/), conforme demostrado abaixo: ```bash # SOMENTE UTILIZE A OPÇÃO SUGERIDA ABAIXO SE POSSUIR FAMILIARIDADE COM O ASSUNTO, EVITANDO ASSIM PROBLEMAS COM ACESSO DE TERCEIROS NÃO AUTORIZADOS EM SUA INSTÂNCIA CKAN # CUIDADO: SOMENTE EXECUTE OS COMANDOS ABAIXO SE OS ARQUIVO "".env"" e "".gitignore"" NÃO EXISTIREM NA RAIZ DO CONJUNTO DE DADOS # CUIDADO: CASO COMANDOS ABAIXO SEJAM EXECUTADOS COM "".env"" e "".gitignore"" EXISTENTES TODO CONTEÚDO DOS MESMOS SERÁ APAGADO # CUIDADO: SOMENTE EXECUTE OS COMANDOS ABAIXO SE TIVER CERTEZA E CONHECIMENTO DO QUE SERÁ FEITO # Crie arquivo "".env"" com estrutura para receber chaves CKAN_HOST e CKAN_KEY # Após a criação, abra o arquivo e inclua os valores para cada variável $ echo ""CKAN_HOST=''\nCKAN_KEY=''"" > .env # Crie arquivo "".gitignore"" com configuração para excluir arquivo "".env"" do controle de versão git $ echo "".env"" > .gitignore # Confira se configuração foi realizada com sucesso # Comando abaixo deverá mostrar apenas criação/modificação de arquivo "".gitignore"", não sendo apresentado nada para arquivo "".env"" $ git status ``` ## Uso **AVISO: VERIFIQUE AS VARIÁVEIS DE AMBIENTE E O CAMINHO DOS ARQUIVOS ANTES DE EXECUTAR CADA COMANDO. NÃO COPIE E COLE O CÓDIGO CEGAMENTE!** ### Acessando documentação do dpckan via terminal ```bash # Informações gerais sobre o pacote e seus comandos # Utilização das flags --help ou -h retornará o mesmo resultado $ dpckan # Informações sobre comandos dataset e resource # Utilização das flags --help ou -h retornará o mesmo resultado $ dpckan dataset $ dpckan resource # Informações sobre subcomandos dataset # Utilização da flag -h retornará o mesmo resultado $ dpckan dataset create --help $ dpckan dataset update --help # Informações sobre subcomandos resource # Utilização da flag -h retornará o mesmo resultado $ dpckan resource create --help $ dpckan resource update --help ``` ### Criando e atualizando com um conjunto de dados via terminal - Para criar um conjunto de dados, execute o comando no diretório aonde o arquivo datapackage.json se encontra: ```bash $ dpckan dataset create ``` - E para atualizar o conjunto de dados, execute o comando no diretório aonde o arquivo datapackage.json se encontra: ```bash $ dpckan dataset update ``` ### Criando e atualizando recursos via terminal - Para criar um recurso, execute o seguinte comando no diretório aonde o arquivo datapackage.json se encontra. Não se esqueça de modificar o último argumento com o nome do recurso presente no arquivo datapackage.json que será criado ```bash $ dpckan resource create --resource-name nome-recurso # Utilização alias -rn para flag --resource-name $ dpckan resource create -rn nome-recurso ``` - Para atualizar um recurso, execute o seguinte comando no diretório aonde o arquivo datapackage.json se encontra. Não se esqueça de modificar os últimos argumentos com o nome e id do recurso presente no arquivo datapackage.json que será atualizado ```bash # Utilização flags --resource-name e --resource-id $ dpckan resource update --resource-name nome-recurso --resource-id id-recurso # Utilização alias -rn e -id para flags --resource-name e --resource-id respectivamente $ dpckan resource update -rn nome-recurso -id id-recurso ``` ### Usando as flags - É possível atualizar um conjunto de dados ou recurso fora do diretório onde o arquivo datapackage.json se encontra utilizando a flag `--datapackage` ou `-dp` como abaixo: ```bash # Utilização flag --datapackage $ dpckan resource update --datapackage local/path/para/datapackage.json --resource-name nome-recurso --resource-id id-recurso # Utilização alias -dp, -rn e -id para flags --datapackage,--resource-name e --resource-id respectivamente $ dpckan resource update -dp local/path/para/datapackage.json -rn nome-recurso -id id-recurso ``` - Podemos usar as flags `-H` para o `CKAN_HOST`, `-k` para o `CKAN_KEY`, `-rn` para o `--resource_name` e `-id` para o `--resource_id`, por exemplo: ```bash # Utilização flags --ckan-host, --ckan-key, --resource-name e --resource-id $ dpckan resource update --ckan-host $CKAN_HOST_PRODUCAO --ckan-key $CKAN_KEY_PRODUCAO --resource-name nome-recurso --resource-id id-recurso # Utilização alias -H, -k, -rn e -id para flags --ckan-host, --ckan-key, --resource-name e --resource-id respectivamente $ dpckan resource update -H $CKAN_HOST_PRODUCAO -k $CKAN_KEY_PRODUCAO -rn nome-recurso -id id-rescurso ``` Para mais exemplos, consulte a [documentação](https://dpckan.readthedocs.io/en/latest/) ## Desenvolvimento ### Contribuir para o projeto - Prerequisitos: - Python 3.9 ou superior - [Documentação de referência mostrando procedimentos necessários para contribuição em um projeto open source](https://www.dataschool.io/how-to-contribute-on-github/) - Passos básicos: - Crie um fork do repositório do projeto - Clone o repositório criado em sua conta após o fork - Navegue até o repositório clonado em sua máquina - Crie e ative um ambiente virtual Python para utilizar o projeto - Crie um branch para realizar as modificações necessárias - Realize o push da branch criada - Abra um PR explicando os motivos da mudança e como esta auxiliará no desenvolvimento do projeto ### Atualizar versão Conforme relatado no [issue 6](https://github.com/dados-mg/dpkgckanmg/issues/6), atualização de versões no [Pypi](https://pypi.org/) deve seguir [estes os passos](https://github.com/dados-mg/dpckan/issues/6#issuecomment-851678297) ## Licença O **dpckan** é licenciado sob a licença MIT. Veja o arquivo [`LICENSE.md`](LICENSE.md) para mais detalhes. ","dpckan is being used to manage the Open Data Portal of the State of Minas Gerais/Brazil (https://dados.mg.gov.br/). The goal of the hackathon project is to further develop dpckan to allow for - Dry run updates to show diff of changed resource properties - Smart sync between data package and CKAN dataset (ie. automatically determine resources that should be updated) - Loading of data in datastore respecting table schema information The bulk of the work will make use of frictionless.py, frictionless-ckan-mapper and ckanapi. There are other related tools in the frictionless/ckan ecosystem that shoud be at least explored for inspiration, such as - ckanext-datapackager. CKAN extension for importing/exporting Data Packages - datapackage-pipelines-ckan. Data Package Pipelines processors for CKAN - data: Command Line Tool (https://github.com/datopian/data-cli) - ckanext-xloader. Express Loader - quickly load data into DataStore. A replacement for DataPusher. - ckanext-validation. CKAN extension for validating Data Packages using Table Schema." "11","https://frictionless.dribdat.cc/project/11","Citation Context Reports","40","Researching","5","Create static reports for using the Frictionless data packages and livemark. ","","https://avatars.githubusercontent.com/u/6946077?v=4","https://github.com/Bubblbu/metrics-in-context","https://github.com/Bubblbu/metrics-in-context","https://github.com/Bubblbu/metrics-in-context","https://github.com/Bubblbu/metrics-in-context/releases","https://github.com/Bubblbu/metrics-in-context/issues/19","","","**This challenge in one video**: https://vimeo.com/625243437 ## About Metrics in Context **A one-paragraph explanation:** Scholarly metrics (e.g., citations and altmetrics) are not only used in research assessment but also increasingly power discovery services and other scholarly applications. However, we rarely ask the question where that data comes from and how it was created. A question that matters as citation data is not simply citation data. Citation data from the Web of Scienc...","2021-09-21T23:12","2022-01-07T16:33","bubblbu, Ayrton","bubblbu","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","","**This challenge in one video**: https://vimeo.com/625243437 ## About Metrics in Context **A one-paragraph explanation:** Scholarly metrics (e.g., citations and altmetrics) are not only used in research assessment but also increasingly power discovery services and other scholarly applications. However, we rarely ask the question where that data comes from and how it was created. A question that matters as citation data is not simply citation data. Citation data from the Web of Science covers different disciplines, different types of articles, and uses different extraction methods than for instance Google Scholar. [Metrics in Context](https://github.com/Bubblbu/metrics-in-context) is a Frictionless Data Tool Fund project aiming to create a *Citation Data Package* that resolves this problem by providing both citation data as well as provenance information in one data structure. **A one-page explanation:** [The README on the Github repo](https://github.com/Bubblbu/metrics-in-context). ## What are Citation Context Reports? For the Frictionless Hackathon, I propose to build a prototype of a *citation context report* for Citation Data Packages using [Livemark](https://livemark.frictionlessdata.io/). These reports would provide insights about the data at hand which go beyond the usual performance assessments, impact indicators, or metrics of excellence by making use of the very built-in provenance information. Just a few hypothetical scenarios in which citation context reports could be useful: > ""Hm... that's a lot of data that I've collected from all these super cool APIs and datasets that are all open. FAIR data FTW! I should probably look into their citation context reports to *figure out which of these indicators I can combine and use to compare groups of researchers*. Or I could just mash them all together and make it super colorful and everybody gets a fancy number they are reduced to!"" > ""Oh dang! Those citation counts are so impressive... and they started a year after me? \*imposter syndrome intensifies\* Ok, breath, let's *check if the indexed articles are representative of the disciplines* I engage with using citation context reports! Either I will feel better and simply stop the never-ending comparing and competing or be crushed by the tyranny of metrics in a neoliberal academy."" > ""Wait, what? Did they seriously just discontinue this service that powers and enables massive parts of the scholarly ecosystem because some shareholder dude wanted more money? Damn, I guess we should have started to invest in community infrastructure earlier... Anyway, for now I'll go and *see if there any other data sources with a similar citation index profile* that could meet my needs."" ### Goals There are three goals (and possible areas to contribute) for the hackathon challenge which are also tracked in their respective GitHub issues. 1. Create a prototype for citation context reports ([issue #28](https://github.com/Bubblbu/metrics-in-context/issues/28)) 2. Develop and prototype use cases for Citation Context Reports ([issue #29](https://github.com/Bubblbu/metrics-in-context/issues/29)) 3. Kickstart the Citation Index Index ([issue #30](https://github.com/Bubblbu/metrics-in-context/issues/30)) ### Contributing I am on the west coast of Canada and unfortunately going to miss most of the time to work synchronously but luckily I also happen to be totally overwhelmed with the other to-dos and tasks in my life. Therefore, we shall dispose of common conceptions of organization, synchronicity, and progress and thrive in the beautiful mess of multiple time zones, living documents, interrupting calls and roommates, and life in general. What I am trying to say: Things are messy, apologies, but please feel free and encouraged to jump in and join anywhere. The three issues linked above might be good spots to begin. " "12","https://frictionless.dribdat.cc/project/12","Frictionless Community Insights","34","Researching","5","A livemark site for data storytelling about the Frictionless Community","","https://avatars.githubusercontent.com/u/5912125?v=4","https://github.com/frictionlessdata/community-insights","https://community-insights.frictionlessdata.io","https://github.com/frictionlessdata/community-insights","https://github.com/frictionlessdata/community-insights/releases","https://github.com/frictionlessdata/community-insights/issues","","","Welcome to team ""Frictionless Community Insights""! Our goal is to create a website, built with Frictionless Livemark, that tells a story about the Frictionless community. Who is involved with Frictionless? Where are they located? What do Frictionless users want from the project? How can the Frictionless team better support the community? We'll answer these questions using data from our recent community survey. We're looking for team members that: can tell a story with data; clean data; want to l...","2021-09-29T15:27","2021-10-06T20:06","pizzaere, Ponaimo, lillywinfree, nikeshbalami, khalid_khattak","lillywinfree","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","A livemark site for data storytelling about the Frictionless Community. ","Welcome to team ""Frictionless Community Insights""! Our goal is to create a website, built with Frictionless Livemark, that tells a story about the Frictionless community. Who is involved with Frictionless? Where are they located? What do Frictionless users want from the project? How can the Frictionless team better support the community? We'll answer these questions using data from our recent community survey. We're looking for team members that: can tell a story with data; clean data; want to learn how to use Livemark; care about creating positive communities of practice; etc. Everyone is welcome to join! **Create a livemark site to display the results of the recent Frictionless community survey.** *Goals:* - analyze the data from the survey - display the data using livemark - tell a story: who are our users? what do our users want? what is working/not working for our users?" "13","https://frictionless.dribdat.cc/project/13","Frictionless Covid Tracker","32","Researching","5","A livemark site tracking COVID-19 disease pandemic","","https://avatars.githubusercontent.com/u/5912125?v=4","https://github.com/frictionlessdata/covid-tracker","https://covid-tracker.frictionlessdata.io","https://github.com/frictionlessdata/covid-tracker","https://github.com/frictionlessdata/covid-tracker/releases","https://github.com/frictionlessdata/covid-tracker/issues","","","The main objective of this project was to test Livemark, one of the newest Frictionless tools, with real data and provide an example of all its functionalities. Besides the charts and tables, the information is available on an interactive map, which also takes into account the accuracy of the official data. Join the Frictionless Covid Tracker project to help build a data journalism site with Livemark! We have a prototype (https://covid-tracker.frictionlessdata.io/) and still need help editing...","2021-10-06T20:10","2021-10-18T20:40","roll","lillywinfree","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","# COVID-19 Tracker A livemark tracking COVID-19 disease pandemic: - https://covid-tracker.frictionlessdata.io ","The main objective of this project was to test Livemark, one of the newest Frictionless tools, with real data and provide an example of all its functionalities. Besides the charts and tables, the information is available on an interactive map, which also takes into account the accuracy of the official data. Join the Frictionless Covid Tracker project to help build a data journalism site with Livemark! We have a prototype (https://covid-tracker.frictionlessdata.io/) and still need help editing the site and adding other functionality. " "6","https://frictionless.dribdat.cc/project/6","Frictionless Tutorials","23","Researching","5","Write new user tutorials for the Frictionless Framework","","","","https://colab.research.google.com/drive/1tTtynfnExykcTYon1j6Y8OgzQZEXpQvP?usp=sharing","","","https://discord.com/channels/695635777199145130/695635777199145133","","","Use Google Colab notebooks to write new tutorials about using the Python Frictionless Framework! The main objective of this project was to write new tutorials using the Python Frictionless Framework. The team not only created a tutorial, but also wrote more [detailed instructions](https://docs.google.com/document/d/1zbWMmIeU8DUwzGaEih0JGJ-DMGug5-2UksRN1x4fvj8/edit?usp=sharing) on how to create new tutorials for future contributors. You can have a look at the tutorial written [during the ha...","2021-09-03T20:20","2021-10-18T20:38","mirianbr","lillywinfree","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","","Use Google Colab notebooks to write new tutorials about using the Python Frictionless Framework! The main objective of this project was to write new tutorials using the Python Frictionless Framework. The team not only created a tutorial, but also wrote more [detailed instructions](https://docs.google.com/document/d/1zbWMmIeU8DUwzGaEih0JGJ-DMGug5-2UksRN1x4fvj8/edit?usp=sharing) on how to create new tutorials for future contributors. You can have a look at the tutorial written [during the hackathon here](https://colab.research.google.com/drive/1tTtynfnExykcTYon1j6Y8OgzQZEXpQvP?usp=sharing)." "9","https://frictionless.dribdat.cc/project/9","frictionless-geojson","23","Researching","5","A GeoJSON Plugin for frictionless-py","","https://avatars.githubusercontent.com/u/50015545?s=200&v=4","https://github.com/cividi/frictionless-geojson","https://hackmd.io/8olAfjl9TpqaYbyq6qXUNg","https://github.com/cividi/frictionless-geojson","https://github.com/cividi/frictionless-geojson/releases","https://github.com/cividi/frictionless-geojson/issues","","","Work on a frictionless-py plugin to add support for reading, writing and inlining geojson and potentially topojson. Basic read and write support already prototyped in [cividi/frictionless-geojson](https://github.com/cividi/frictionless-geojson). Rethinking the approach by taking specific use cases into consideration, testing compatibility with the main library and refactoring the code will be main purpose of the work during the hackathon preparing the plugin for a first stable release.","2021-09-13T14:08","2022-01-07T16:31","n0rdlicht, avdata99, loleg","n0rdlicht","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","","Work on a frictionless-py plugin to add support for reading, writing and inlining geojson and potentially topojson. Basic read and write support already prototyped in [cividi/frictionless-geojson](https://github.com/cividi/frictionless-geojson). Rethinking the approach by taking specific use cases into consideration, testing compatibility with the main library and refactoring the code will be main purpose of the work during the hackathon preparing the plugin for a first stable release." "5","https://frictionless.dribdat.cc/project/5","Improve Frictionless Livemark","3","Challenge","-1","Help improve the new Frictionless tool Livemark by solving bugs.","","","https://github.com/frictionlessdata/livemark/","","","","https://discord.com/channels/695635777199145130/695635777199145133","","","Help us make Livemark even better! You can help us by tackling one of these [issues that are good for first-time contributors](https://github.com/frictionlessdata/livemark/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or by creating a new issue with your own idea!","2021-09-03T20:18","2021-09-03T20:19","johan, Meyrele","lillywinfree","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","","Help us make Livemark even better! You can help us by tackling one of these [issues that are good for first-time contributors](https://github.com/frictionlessdata/livemark/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or by creating a new issue with your own idea!" "4","https://frictionless.dribdat.cc/project/4","Improve Frictionless Python Framework","1","Challenge","-1","Help improve the Frictionless Python Framework by solving bugs.","","","","https://github.com/frictionlessdata/frictionless-py/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22","","","https://discord.com/channels/695635777199145130/695635777199145133","","","Do you want to contribute to Frictionless Data but aren't sure how? Check out these issues that are good for first-time contributors: [issue link](https://github.com/frictionlessdata/frictionless-py/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22).","2021-09-03T20:16","2021-09-03T20:16","","lillywinfree","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","","Do you want to contribute to Frictionless Data but aren't sure how? Check out these issues that are good for first-time contributors: [issue link](https://github.com/frictionlessdata/frictionless-py/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)." "7","https://frictionless.dribdat.cc/project/7","Frictionless Components app","1","Challenge","-1","Use the React components to edit schemas and validate data.","","https://avatars.githubusercontent.com/u/5912125?v=4","https://github.com/frictionlessdata/components","https://components.frictionlessdata.io","https://github.com/frictionlessdata/components","https://github.com/frictionlessdata/components/releases","https://github.com/frictionlessdata/components/issues","","","Add Frictionless components functionality to your existing or new app to describe and modify data with Table component or validate your data with the Report component.","2021-09-07T19:08","2021-09-07T19:08","","lillywinfree","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","# Frictionless Components [![Build](https://img.shields.io/github/workflow/status/frictionlessdata/components/general/main)](https://github.com/frictionlessdata/components/actions) [![Coverage](https://img.shields.io/codecov/c/github/frictionlessdata/components/main)](https://codecov.io/gh/frictionlessdata/components) [![Registry](https://img.shields.io/npm/v/frictionless-components.svg)](https://www.npmjs.com/package/frictionless-components) [![Codebase](https://img.shields.io/badge/github-main-brightgreen)](https://github.com/frictionlessdata/components) [![Support](https://img.shields.io/badge/chat-discord-brightgreen)](https://discord.com/channels/695635777199145130/695635777199145133) Visual components for the Frictionless Data project in TypeScript/React. ## Purpose - **Visualize Frictionless**: With Frictionless Framework and other Frictionless libraries you can create and edit various metadata structures. Frictionless Components allows to visualize them and provide a user friendly UI to work with them. ## Features - Open Source (MIT) - Reusable React Components ## Example ```javascript const element = document.getElementById('app') frictionlessComponents.render(frictionlessComponents.Report, {report}, element) ``` ## Documentation Please visit our documentation portal: - https://components.frictionlessdata.io ","Add Frictionless components functionality to your existing or new app to describe and modify data with Table component or validate your data with the Report component." "10","https://frictionless.dribdat.cc/project/10","Things not Datasets","1","Challenge","-1","This project sets out to create a wikidata-like site which extracts data about different objects from multiple datasets (example below)","","","","https://osuked.github.io/Power-Station-Dictionary/objects/London%20Array%20Windfarm/","https://github.com/OSUKED/Power-Station-Dictionary/","","https://github.com/OSUKED/Power-Station-Dictionary/issues","","","Currently, different objects (e.g. people, companies, power plants) have data about them spread across various datasets, often with several ids referring to the same object. This project sets out to create a new dictionary schema that acts as a central node within a data catalogue, linking different datasets together based on the objects they describe. Additionally, the dictionary can be used to describe relevant attributes which should be extracted for different objects. The components of the d...","2021-09-19T09:52","2021-09-23T19:30","johan, Ayrton","Ayrton","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","","Currently, different objects (e.g. people, companies, power plants) have data about them spread across various datasets, often with several ids referring to the same object. This project sets out to create a new dictionary schema that acts as a central node within a data catalogue, linking different datasets together based on the objects they describe. Additionally, the dictionary can be used to describe relevant attributes which should be extracted for different objects. The components of the dictionary are quite simple, with a CSV lookup table that can handle 1-to-many mappings and an extension to the specification which allows you to express which datasets different identifiers refer to. The core change to the schema is the use of `foreignKeys` to link to external datasets that use ids specified in the dictionary, the attributes entry then describes the columns which should be extracted from the dataset. ```python ""foreignKeys"": [ { ""fields"": ""osuked_id"", ""reference"": { ""package"": ""https://raw.githubusercontent.com/OSUKED/Dictionary-Datasets/main/datasets/plant-locations/datapackage.json"", ""resource"": ""plant-locations"", ""fields"": ""osuked_id"", ""attributes"": [""longitude"", ""latitude""] } }, ``` So far a basic dictionary schema has been created and Python code developed to link and extract data relating to objects described in the dictionary (represented in an intermediate JSON file). This data is then used to populate [a website that describes the different objects](https://osuked.github.io/Power-Station-Dictionary/objects/London%20Array%20Windfarm/) in a wikidata-like format. Next Steps: 1. Add a datasets page to the site which renders all of the datasets linked to the dictionary (ideally integrated or built on livemark) 2. Add a dictionary page which provides a high-level overview of the dictionary 3. Enable the option to link multiple dictionaries together 4. The development of ""attribute recipes"", a mechanism to combine extracted attributes (from different datasets) into new data (e.g. power plant output and carbon emissions to calculate carbon intensity) 5. Inclusion of units information - enabling automated generation of derived attributes 6. ""special data"" handling, e.g. enabling spatial data to be extracted and shown on a map (ideally tapping into the LiveMark plugin system) The concept is discussed further in [this video](https://youtu.be/jQfmQyDRo4E?t=1230)" "3","https://frictionless.dribdat.cc/project/3","Dataset List","1","Challenge","-1","Our main goal is to create a website that lists all the datapackages on GitHub.","","https://avatars.githubusercontent.com/u/5912125?v=4","https://github.com/frictionlessdata/data-packages","https://github.com/frictionlessdata/data-packages","https://github.com/frictionlessdata/data-packages","https://github.com/frictionlessdata/data-packages/releases","https://discord.com/channels/695635777199145130/695635777199145133","","","Our main goal is to create a website that lists all the datapackages on GitHub. We'll build the site using Frictionless Livemark.","2021-09-03T19:33","2021-10-08T12:26","johan, GabrielBraicoDornas, roll","lillywinfree","https://frictionless.dribdat.cc/event/1","Frictionless Hackathon","# Data Packages A livemark listing data packages hosted on Github: - https://data-packages.frictionlessdata.io ","Our main goal is to create a website that lists all the datapackages on GitHub. We'll build the site using Frictionless Livemark."