• 0 Posts
  • 53 Comments
Joined 2 years ago
Cake day: September 9th, 2023






  • being able to control the player from an Android phone was so convenient and I don’t know any other player that has anything similar.

    Well, you can remote-control playback in Kodi through apps like Kore, and browse the libraries, but it’s a totally different experience compared to dedicated music player apps. Kodi is more like software for a home theater PC, a.k.a. a media center.

    The most viable solution I can think of that includes both a desktop UI and remote control from a phone would be hosting a Jellyfin server for the music library, then using the Android client app to remotely control another client app running on your desktop. I do that every day (though mostly for video content): I use my phone to control playback on a Raspberry Pi running Kodi with the “JellyCon” client add-on, but the target could be any other Jellyfin client, such as the regular Jellyfin desktop client.
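
    If you ever want to script it instead of (or alongside) the apps, the remote control goes through Jellyfin’s Sessions API under the hood. Here’s a rough Python sketch of the idea; the server URL and API key are placeholders for your own setup, and the /Sessions routes are how I understand the Jellyfin API, so double-check them against your own server’s API docs:

    ```python
    # Rough sketch: remote-controlling another Jellyfin client over the REST API.
    # JELLYFIN_URL and API_KEY are placeholders; create an API key in the admin dashboard.
    import requests

    JELLYFIN_URL = "http://192.168.1.10:8096"   # placeholder: your server
    API_KEY = "xxxxxxxxxxxxxxxx"                # placeholder: your API key

    headers = {"X-Emby-Token": API_KEY}

    # List the active sessions; every connected client (desktop app, Kodi add-on, ...)
    # should show up here with an Id, DeviceName and Client string.
    sessions = requests.get(f"{JELLYFIN_URL}/Sessions", headers=headers, timeout=10).json()
    for s in sessions:
        print(s.get("Id"), s.get("DeviceName"), s.get("Client"))

    # Send a playstate command (Pause, Unpause, Stop, NextTrack, ...) to one of them,
    # e.g. the desktop client you want to drive from your phone.
    target_id = sessions[0]["Id"]  # pick the right session instead of blindly taking the first
    requests.post(f"{JELLYFIN_URL}/Sessions/{target_id}/Playing/Pause",
                  headers=headers, timeout=10)
    ```

    Starting playback of a specific item works in a similar way through the same Sessions routes, if I recall correctly, but the client apps obviously make all of this much more convenient.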







  • Thinking a bit outside the box, if your phone is capable of it, you could find a way to run a small local LLM on it. Maybe it can even be done in Termux?

    If that’s not an option, and/or you need a bigger, more capable model, you could host a local Ollama instance and connect to it from the Ollama app (IzzyOnDroid) or GPTMobile (F-Droid). That way, you only connect to your own machine instead of some third-party translation or LLM provider.

    I think that, with a well-written system prompt, you could make it more efficient by concisely instructing the model to expect your text input plus a target language (or by putting permanent language instructions in the system prompt), and to then output only the translated version of your input in that language. This keeps the number of input and output tokens low, saving some inference time and energy. You can also get creative and instruct it to output multiple variations, change the style/tone/formatting, provide an example sentence containing a single translated word, etc…
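
    To make that a bit more concrete, here’s a minimal sketch of what such a setup could look like against a local Ollama instance. The model name and the exact wording of the system prompt are just examples (assumptions on my part), so adjust them to whatever you have pulled:

    ```python
    # Minimal sketch: using a local Ollama instance as a no-third-party translator.
    # Assumes Ollama is listening on localhost:11434 and that a model has been
    # pulled already (the model name below is just an example).
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    SYSTEM_PROMPT = (
        "You are a translator. The user sends a target language on the first line "
        "and the text to translate on the lines after it. "
        "Reply with the translation only, no explanations."
    )

    def translate(text: str, language: str, model: str = "llama3.2") -> str:
        resp = requests.post(OLLAMA_URL, json={
            "model": model,
            "system": SYSTEM_PROMPT,
            "prompt": f"{language}\n{text}",
            "stream": False,  # one JSON object back instead of a token stream
        }, timeout=120)
        resp.raise_for_status()
        return resp.json()["response"].strip()

    print(translate("Where is the nearest train station?", "German"))
    ```

    Keeping the system prompt short and restricting the expected output to the translation itself is what keeps the token count (and thereby the compute and battery cost) down.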


  • Dear C-Keen.

    I’m also seekin’. Seekin’ answers!

    I’m curious as to where the line is drawn in mixed cases, since I’m experimenting with both pixel art painting and txt2img/img2img models, and have had thoughts of combining the two to generate extensions of my own works.

    Relevant examples of unclear cases to clarify:

    1. Starting from an original work as the core “seed”, outpainting the world by expanding the canvas around it and continuing the work based on that input.
    2. Same as 1., then manually adding further edits after doing the txt2img outpainting.
    3. Starting from one’s own original, non-generated work of art and using style transfer to generate a closely similar edition in a pixel art style.
    4. Starting from one’s own text prompt to generate some pixel art, then manually editing the result.

    I understand that this is about appreciating artists. Pixel art is a craft with a rich history, and its constraints are a kind of dogma, which can be helpful and fun for artists to work within. Generating the works seems meaningless from that perspective, but I’d argue, nonetheless, that all of the examples above (not just any generated pixel art) are a continuation and natural development of the craft, which has already been changing through the times, from analog embroidery through the digital age of computers and software. Should we keep insisting on crafting the traditional way, or can we use modern tools? How many colors are allowed, if we want to stick to earlier pixel art traditions here?

    As I see it, all of the listed examples require a certain degree of artistic work and couldn’t have existed without it, but they use txt2img or img2img generation as tools within the artistic process, experience and output. One could argue that these pieces represent the current state of the craft, and that artists working with these tools should not be excluded from here. On the other hand, I fully respect the opinion that they should be, so that this community collects and adores only the 100% manually painted works.

    I find it easy to understand your decision regarding 100% generated pixel art, but the next question that arises is how you will point out the generated ones that get posted, apart from obviously incorrect stuff like a gradient “pixel” that defies the technical limitations of real works. But then again, what if an artist manually draws such glitches, faking a lower resolution than the actual image file and then breaking it? Shouldn’t that be allowed?
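
    Just to illustrate what I mean by technically inconsistent works: a crude automated check could test whether an image actually sits on a uniform pixel grid, i.e. whether every claimed “pixel” is a single flat colour. A rough sketch (Pillow + NumPy; the block size and file name are assumptions you would have to guess or get from the artist):

    ```python
    # Rough sketch: does the image sit on a uniform NxN pixel grid,
    # i.e. is every NxN block a single flat colour (no gradient "pixels")?
    import numpy as np
    from PIL import Image

    def is_on_pixel_grid(path: str, block: int, tolerance: int = 0) -> bool:
        img = np.asarray(Image.open(path).convert("RGB"))
        h, w, _ = img.shape
        if h % block or w % block:
            return False  # canvas isn't even a multiple of the claimed pixel size
        # Split the image into block x block tiles and measure the colour spread per tile.
        tiles = img.reshape(h // block, block, w // block, block, 3)
        spread = tiles.max(axis=(1, 3)) - tiles.min(axis=(1, 3))
        return bool((spread <= tolerance).all())

    print(is_on_pixel_grid("artwork.png", block=8))  # hypothetical file and pixel size
    ```

    Of course, that only catches the sloppy cases; as soon as the generated output is rendered back onto a clean grid (or an artist deliberately breaks the grid by hand, as above), a check like this says nothing about authorship.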

    Please don’t get me wrong, I’m not here to oppose any decisions or to favor one ruling over another. I just think these questions are interesting to ask. Have you thought about how to actually enforce this? If you don’t want 100% generated works here, how will you make sure to find exactly those, and only those? Or, if you want absolutely 0% generated pixels in here, how will you find the ones with any at all, like in the examples I listed above? The community hivemind? How do you avoid false accusations?

    I’m looking forward to hearing your thoughts, whatever they are. Best regards.



  • pirat@lemmy.world to F-Droid@lemmy.ml · Showly · 7 months ago

    The reason could be that Trakt integrates with media server apps like Jellyfin or Plex, or with apps like Kodi, so you can bring your watch history with you across apps and don’t lose it if your server crashes, your library gets corrupted, or something similar… I have never used it, but I’d imagine that’s a reason to use it. If I knew of a libre alternative, I’d actually consider using it with Jellyfin.


  • pirat@lemmy.world to F-Droid@lemmy.ml · Launcher Kvaesitso · edited · 7 months ago

    No idea why, but I don’t see their comments anywhere in this thread. Thanks for confirming.

    EDIT:

    I found this metadata file; is that the one?

    https://gitlab.com/fdroid/fdroiddata/-/blob/master/metadata/de.mm20.launcher2.release.yml

    From the file:

    MaintainerNotes: |-
    Kvaesitso uses several external APIs for search providers. Several of them require signing up to obtain a developer API key: gdrive search, openweathermap, HERE and Meteorologisk institutt. It’s not possible for users to provide these keys as explained here: https://github.com/MM2-0/Kvaesitso/issues/227#issuecomment-1366826219
    If keys are not provided, these features are automatically disabled during the build.

    core/shared/build.gradle.kts and plugins/sdk/build.gradle.kts have configurations in them for publishing artifacts to maven repos. They are not used during the build, but detected by F-Droid scanner anyway. We patch it out from core/shared/build.gradle.kts, since this module itself is still used in compilation, and delete plugins/sdk/build.gradle.kts because it’s not used in app compilation.

    Kvaesitso depended on different libraries used for gdrive login in the past that pulled GMS dependency, however it’s not the case anymore:

    https://github.com/MM2-0/Kvaesitso/issues/583#issuecomment-1775268896 The new libraries pull OpenTelemetry though, but it’s unclear if it’s used (considering gdrive integration is disabled).

    Max heap size is reduced in gradle.properties to avoid gradle daemon being killed by OOM manager.

    Older versions of Kvaesitso had onedrive integration that depended on non-whitelisted maven repos, but it was removed.

    Upstream provides an fdroid flavor, however there’s no difference with default flavor except for different versionName.

    For some reason, F-Droid fails to pick up the correct gradle version from distributionUrl if subdir is used.

    So it seems F-Droid removes the gdrive and onedrive integrations in their build. There seems to be no mention of Wikipedia, though.



  • This app promotes or depends entirely on a non-free network service

    When viewing the app in F-Droid, the note below that label says that it uses a third-party service for currency exchange rates.

    I don’t know whether the fact that it can show Wikipedia results, and that you can connect it to your Google account (to show cloud files from Drive and such in the search results), plays a role too, but it isn’t specifically mentioned under the anti-features… On a side note, searching your own ownCloud or Nextcloud is supported too.