Noah’s End Of Year Reflection

Author: Noah Lee

Hello! I joined the DHAs in January of this year, and over the past five months I have learned so much and gained much-needed experience working in a professional setting where more was expected of me. This has been a great experience, and I am glad I got to share it with such kind people. I learned a great deal with the DHAs, and I know there is still plenty of room to grow.

First, I learned how to communicate formally, and how to do it well. Never in my prior work experience did I have to connect with professors and other students. While it may be intuitive for others, I found it challenging to send formal emails, follow up on them, and keep communication strong. I got to teach WordPress to a class of mostly seniors when I had only started learning the platform a few months prior. I was extremely nervous, and although I personally thought I could have done better, according to Em I did a good job. I also learned how to communicate with my coworkers. Throughout my time here, I worked mainly on projects with other DHAs or on projects overseen by my supervisors. I learned to ask questions whenever I got stuck instead of stubbornly trying to push through, and to continuously check in with my team members, particularly when it came to the CARCAS project. CARCAS was what I spent most of my time working on with Erin; it centered on uploading and maintaining 3D scans of various carcasses on the CARCAS website.

Screenshot of the poster-script I wrote for CARCAS using Node.js and Puppeteer running locally.

Another skill I gained from working with the DHAs was learning new technologies quickly. Git, Conda environments, Node scripts, Puppeteer, DataLad, Model-Viewer, and WordPress are just some of the technologies I had to pick up on the fly. At first, I had a lot of trouble reading documentation and applying so many new technologies to large projects like CARCAS. However, I got much better at it, and in my other classes, like CS 347, I started to realize that part of being a great computer scientist is being able to learn new technologies quickly, as technology is always advancing. I have certainly not mastered this skill yet, particularly when it comes to reading large volumes of documentation, but I can say I improved at it, and being with the DHAs made me realize its importance.

Finally, another skill I learned is how to manage time effectively. The work with the DHAs was very independent. We would choose a task, and it was up to us to take the necessary steps to get it done. Em and Austin would never remind me to do something at a certain time; it was all on me. This required a lot of self-discipline and responsibility, whether that meant sending emails to the right people on time or getting the poster script onto the server by a certain date. I also had to balance working with the DHAs with taking several high-level classes this term. I had to learn to say I was too busy when offered a task, which was something I used to find difficult, especially at the start of my time here, when I would accept any and all tasks presented to me.

Unfortunately, with the swim season starting again next fall and an even more difficult course load next term, I have made the decision to step back from the DHAs for the time being to focus on my academics and swimming. Being a DHA has been a wonderful experience that I will forever be grateful for as my first comprehensive work experience, and I look forward to working with the DHAs again in the future.

Goodbye for now!


Using Generative AI for DH

Digital Humanities (DH) is brimming with passionate individuals eager to explore the depths of human culture and history through the lens of technology. Now, a revolutionary tool is transforming the way these enthusiasts approach their work: artificial intelligence (AI). This article delves into the exciting applications of AI in the DH landscape, exploring how it can empower people to work smarter, dig deeper, and unlock new avenues of understanding. Below, I show various ways in which AI can be used for DH work.

Writing

While AI cannot replicate the human touch and creativity fundamental to writing, it offers a diverse toolbox that can significantly enhance the writing process. From overcoming writer’s block and generating initial ideas to conducting research, checking grammar and style, and even exploring different writing styles, AI provides writers with a range of valuable tools to streamline their work and fuel their creative exploration. However, it’s crucial to remember that AI serves as a collaborator, not a replacement, in the writing journey. It is the human writer who ultimately wields the power of the pen, harnessing the capabilities of AI to refine their craft and unleash their unique voice.

Generative AI tools such as ChatGPT and Google Gemini can help with writing in multiple ways. They can suggest points for your blog, essay, or post; correct grammatical mistakes; rewrite certain sentences; and get you started by drafting an introduction to your piece. The figure below shows part of a response from Google Gemini when I asked it whether AIs can write.

While one cannot fully rely on AIs to write, they are certainly very useful as writing tools, especially when you provide them with your own ideas. There are copyright implications when AIs are used to generate images, but these concerns are greatly reduced when AIs are employed for writing, especially when the human user supplies a unique idea for the AI to develop.

Translation

AI has revolutionized the field of translation by offering a suite of powerful tools and techniques that enhance the efficiency and accuracy of the translation process. Machine translation systems, such as Google Translate and DeepL, employ advanced algorithms like neural machine translation (NMT) to translate text between languages. These systems continuously improve through machine learning, analyzing vast amounts of translated data to refine their translations and capture nuances more effectively. Furthermore, generative AIs such as Gemini and ChatGPT have their own distinctive approach to translation that differs from tools like Google Translate. AI-driven translation memory tools, like SDL Trados and MemoQ, store previously translated segments and suggest them to translators when encountering similar content. This not only accelerates translation but also ensures consistency across documents and projects. Natural Language Processing (NLP) techniques further enhance translation quality by enabling AI systems to understand and generate human language more accurately. NLP algorithms analyze sentence structures, grammar rules, and contextual clues to produce translations that are contextually relevant and linguistically precise.

In addition, AI assists in managing glossaries and terminology databases, ensuring consistency of terminology throughout translations. These tools automatically identify and suggest appropriate translations for specific terms, reducing errors and maintaining coherence. AI can also aid in post-editing machine-translated content by providing suggestions for improving fluency, readability, and accuracy. Post-editing tools analyze translated text and offer alternative phrasing, correct grammatical errors, and highlight potential mistranslations for human editors to review and refine. Moreover, AI-driven content generation platforms assist in creating multilingual content by automatically translating existing texts into multiple languages. While these systems may not match the quality of human translation entirely, they serve as a valuable starting point for further refinement by professional translators. Overall, while AI has significantly streamlined and enhanced the translation process, human translators remain essential for tasks requiring cultural understanding, creative adaptation, and linguistic nuance, ensuring the highest quality of translation output.

Many such services are still under development, and free access is limited to ChatGPT and Gemini, but in the future we can expect broader access to tools that will significantly increase the speed and accuracy of translation. This can have major implications for DH work in various languages and for creating multilingual DH projects.

Image Generation

The realm of visual creation is undergoing a dramatic shift with the emergence of AI-powered image generation. This innovative technology empowers users to translate their written descriptions into stunning visuals, spanning the spectrum from photorealistic landscapes to abstract artistic expressions. Tools like DALL-E and Midjourney allow users to describe their desired image using specific keywords and phrases, prompting the AI to generate visuals in various styles, color palettes, and compositions. These tools unlock a universe of possibilities for artists, designers, and even casual users, enabling them to bring their creative visions to life in an entirely novel way. However, it’s crucial to acknowledge that AI image generation is still in its infancy. While tools like Stable Diffusion offer advanced customization options like image size and specific details, ethical considerations remain paramount. Concerns regarding potential biases within the training data and the ownership of AI-generated artwork are crucial aspects of this rapidly evolving technology. As this technology continues to develop, addressing these concerns will be essential to ensure its responsible and ethical application in the realm of visual creation.

If these ethical concerns are resolved, which seems unlikely in the near term, these image-generation AIs could prove very helpful for DH work, helping us create pictures and illustrations. OpenAI is now even testing video generation, which could prove even more useful for a variety of DH projects.

Coding

Another field in which AI can be very helpful is code generation. AI is transforming software development, aiding developers in various tasks. Through neural networks, it offers auto-completion tools that speed up coding with intelligent suggestions. It also assists in synthesizing code from high-level specifications, enabling faster development, and aids in refactoring and optimization by identifying inefficiencies and suggesting improvements. Additionally, it facilitates rapid prototyping by generating and refining code iteratively. Despite its limitations, AI promises to reshape software development, making it smarter and more efficient. The figure below shows the output Gemini produced when I asked it to generate a specific piece of code.

Conclusion

In conclusion, I think it is important to acknowledge the various ways in which AI can make our work better and more efficient. At the same time, there are technical and ethical concerns attached to it. Technical concerns include that AI's writing style differs from a human's, the code it generates may be wrong, the images it produces may have flaws, and its translations may contain errors. In the end, we need to find those errors and correct them. That is where the human factor remains essential.


Using Gale Digital Scholar Lab: Utilizing n-grams

An introduction to GDSL and its tools has already been given in a previous blog post. In this blog, I will attempt to explain the utility of another GDSL tool: n-grams. An n-gram is a contiguous sequence of n items from a given sample of text or speech. These items can be characters, words, or even other units like phonemes or syllables, depending on the context. N-grams are widely used in natural language processing (NLP) and computational linguistics for various tasks, including language modeling, text analysis, and machine learning.

The “n” in n-gram represents the number of items in the sequence. Commonly used n-grams include:

  1. Unigrams (1-grams): These are single items, which are typically individual words. For example, in the sentence “The quick brown fox,” the unigrams are “The,” “quick,” “brown,” and “fox.”
  2. Bigrams (2-grams): These consist of pairs of adjacent items. In the same sentence, the bigrams would be “The quick,” “quick brown,” and “brown fox.”
  3. Trigrams (3-grams): These consist of sequences of three adjacent items. For the same sentence, the trigrams would be “The quick brown” and “quick brown fox.”
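The three cases above can be sketched in a few lines of Python; the `ngrams` helper below is purely illustrative and not part of GDSL or any particular library.

```python
# Illustrative sketch of n-gram extraction from a token sequence.
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "The quick brown fox".split()
print(ngrams(tokens, 1))  # unigrams: ('The',), ('quick',), ('brown',), ('fox',)
print(ngrams(tokens, 2))  # bigrams:  ('The', 'quick'), ('quick', 'brown'), ('brown', 'fox')
print(ngrams(tokens, 3))  # trigrams: ('The', 'quick', 'brown'), ('quick', 'brown', 'fox')
```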

N-grams are often used in language modeling to estimate the probability of a specific word or sequence of words occurring in a given context. They are also used in various NLP tasks, such as text generation, machine translation, and sentiment analysis. N-grams provide a way to capture some of the context and relationships between words in a text, which can be useful for many language-related applications.
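A probability estimate of this kind can be sketched with simple counts; the toy corpus and the `p_next` helper here are hypothetical, chosen only for illustration.

```python
from collections import Counter

# A toy bigram model: estimate how likely one word is to follow another
# from raw counts in a small corpus.
corpus = "the quick brown fox jumps over the lazy dog the quick fox".split()

bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)

def p_next(word, nxt):
    """Maximum-likelihood estimate of P(nxt | word)."""
    return bigram_counts[(word, nxt)] / unigram_counts[word]

# 2 of the 3 occurrences of "the" are followed by "quick", so this prints 2/3
print(p_next("the", "quick"))
```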

In GDSL, the n-gram analysis can be used in two ways:

  1. Word Cloud: Word Cloud is a visual representation of a collection of words, where the size of each word is proportional to its frequency or importance in the text. Typically, word clouds are used to quickly and visually convey the most prominent words in a piece of text, making it easy to identify the most common or significant terms at a glance.
  2. Term Frequency: Term Frequency (TF) is a fundamental concept in natural language processing, information retrieval, and computational linguistics. It serves as a quantitative measure of how often a specific term or word occurs within a document or text corpus, thereby aiding in the assessment of the term’s significance and relevance in a particular textual context. In essence, TF offers a means to quantify the emphasis placed on individual terms within documents.
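For illustration, here is a minimal sketch of how raw and relative term frequency might be computed; the token list is a made-up stand-in for a tokenized content set.

```python
from collections import Counter

# Compute raw and relative term frequency for a small token list.
tokens = "pakistan war india war soviet war pakistan border".split()
counts = Counter(tokens)
total = len(tokens)

# A word cloud draws each term at a size proportional to these counts.
for term, count in counts.most_common(2):
    print(term, count, count / total)  # term, raw TF, relative TF
```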

Both these tools can provide a useful way to understand the main concepts, ideas, and words in a textual corpus. Here is an example of a word cloud made from our test content set.

To attain precision in n-grams, qualifiers in search can be utilized. First, create a content set with parameters X and Y (call it XY). Then generate a hypothesis Z about XY. Z could concern the influence of another factor, an explanation behind certain events, or a correlation with other factors. Once the hypothesis has been generated, incorporate it into your search by adding another parameter corresponding to Z. The new content set created by parameters X, Y, and Z (call it XYZ) will be a subset of the prior content set. Analyzing the set difference XY \ XYZ gives insight into what data was excluded when parameter Z was introduced. This can aid in identifying different clusters of data within the same corpus. The word clouds can also aid in visual identification, since the word clouds for the two content sets will appear different.

For example, take the word cloud of the content set with parameters X = Pakistan and Y = War, combined with the AND operator. The hypothesis here was that this content set contains two clusters: one that reports the war between India and Pakistan, and another that reports the war between Pakistan and Afghanistan (and the Soviet Union). To check this, parameter Z = India was added and the difference XY \ XYZ was analyzed. Indeed, “Soviet” appears in XY but not in XYZ, which confirms the hypothesis.
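A rough sketch of this comparison in code, with hypothetical term sets standing in for the vocabularies of the two GDSL content sets:

```python
# Hypothetical vocabularies for the two content sets described above.
xy_terms = {"pakistan", "war", "india", "soviet", "afghanistan"}
xyz_terms = {"pakistan", "war", "india"}  # narrowed by adding Z = India

# Terms present in XY but excluded once Z was introduced (XY \ XYZ).
missing = xy_terms - xyz_terms
print(sorted(missing))  # → ['afghanistan', 'soviet']
```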

Although this might be a little complex, it can help greatly in understanding and qualifying data.

Where the missing terms rank in the frequency list can also indicate how prominent they were in XY as a whole.
