jb55
2mo ago
I’m just adding local NIP-50 search support to the Damus clients, but it looks like a toy compared to Primal’s advanced search, which you can’t really do on the protocol at the moment. So when I launch it, people are going to be like “lame, we already had this on Primal”. So unless I do something proprietary, or spend the next 6 months trying to get advanced search into the protocol, proprietary clients will always win.
My only conclusion is that clients will start competing with each other on off-spec advanced features, which means you won’t be able to build a pure client anymore. Then nostr fails.
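For context, a NIP-50 search is just an ordinary REQ whose filter carries an extra `search` field. A minimal sketch of the message a client would send; the subscription id and query string here are made up:

```python
import json

# NIP-50 extends the standard NIP-01 filter with a "search" field.
sub_id = "local-search-1"          # illustrative subscription id
nip50_filter = {
    "kinds": [1],                  # text notes
    "search": "nostr zaps",        # free-form full-text query
    "limit": 100,
}
req = json.dumps(["REQ", sub_id, nip50_filter])
# The client sends `req` over the relay websocket and receives
# ["EVENT", sub_id, event] messages back.
```

A relay advertising NIP-50 returns matches ranked by its own notion of relevance, which is exactly why anything fancier, like min-zap thresholds, has no place to go in this filter.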
Replies
Thibaut
@Thibaut
2mo ago
I don't have a problem with proprietary tech or platforms building on Nostr.
Many proprietary businesses can be built on this huge transparent and verifiable dataset. AI and LLMs are one example. Search is another. As long as the dataset remains open and not a walled garden, proprietary technologies built on top of Nostr will not signal its decline — quite the opposite.
Joe Ruelle
@JoeRuelle
2mo ago
I'm thinking somewhat along these lines too. I feel like the protocol, even in its earlier stages, is enough to enable some fantastic and scalable use cases. If the protocol doesn't need to further develop then clients can't so easily be accused of siphoning off development resources from the protocol in a zero sum game.
The problem as I see it is that Twitter-style microblogging, or what they call "Big World" over at the AT Protocol to emphasise the consistent global view, isn't one of those use cases. If the end goal is to be at least sort of like Twitter (plus censorship resistant, etc.) then the protocol will always demand more resources, as we'd basically be asking magic of it, and the zero-sum game would continue to be played day in and day out.
It's a tough one though because Big World microblogging is …
322905e2d7
@322905e2d7
2mo ago
What I'm still not convinced of is that proprietary nostr clients / ecosystems make much sense as a business model.
Twitter can barely survive bombarding people with shitty ads. How exactly is Primal supposed to make a profit?
Joe Ruelle
@JoeRuelle
2mo ago
This is also my question.
sudocarlos
@sudocarlos
2mo ago
It's time to implement the "green chat bubbles". Not sure what that would be in this case, but users of misbehaving clients could be made aware by their followers ("hey, this thing you shared looks weird on my proper nostr client") instead of everyone else compensating for broken compatibility.
au9913
@au9913
2mo ago
It's so insane to me that Primal has users. They were asked to implement Amber support a year ago and still haven't, meanwhile 75% of new Android apps implement it within their first month of release.
Are you thinking that they have a DB storing as many notes as possible for search, or what?
TKay
@TKay
2mo ago
Is the Primal advanced search not just a filter?
Or is a filter something we can’t do right now?
jb55
@jb55
2mo ago
It filters based on min zaps, excluded words, and things like that. You can’t really do that without a custom relay, as relays aren’t really supposed to know about zaps.
NIP-50 was nice because it standardizes fulltext search, so I could make Damus local search compatible with relays. I can’t really do min-zap advanced search “properly” without a relay NIP, but I can hack it with nostrdb at least, and just try to pull down as much data as possible.
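As a side note on why this is hard: the amount of a zap is buried in the bolt11 invoice carried by the kind-9735 zap receipt (NIP-57), so whoever wants a min-zap filter first has to decode invoices. A rough sketch of extracting millisats from a bolt11 human-readable part, ignoring checksums, amountless edge cases, and anything beyond the mainnet/testnet prefixes:

```python
# BOLT-11 amount multipliers, expressed as msats per unit
# (1 BTC = 10^11 msat; m = milli, u = micro, n = nano).
MSATS_PER_UNIT = {"m": 10**8, "u": 10**5, "n": 10**2}

def bolt11_msats(invoice: str) -> int:
    # The bech32 separator is the last "1"; everything before it is the hrp.
    hrp = invoice.rsplit("1", 1)[0]
    for prefix in ("lnbc", "lntb"):
        if hrp.startswith(prefix):
            amount = hrp[len(prefix):]
            break
    else:
        raise ValueError("not a bolt11 invoice")
    if not amount:
        raise ValueError("amountless invoice")
    if amount[-1] in MSATS_PER_UNIT:
        return int(amount[:-1]) * MSATS_PER_UNIT[amount[-1]]
    if amount[-1] == "p":              # pico-BTC: 10 p = 1 msat
        return int(amount[:-1]) // 10
    return int(amount) * 10**11        # bare amount = whole BTC
```

So "lnbc10u…" is 10 micro-BTC, i.e. 1000 sats or 1,000,000 msat; a relay-side min-zap index would have to do this decoding for every receipt it stores.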
TKay
@TKay
2mo ago
Oh, you mean you can’t send a REQ with this level of detail.
I thought they were pulling whatever the relay can understand from a REQ and then applying the filter client-side.
Or at least everyone else can do that. I thought that’s the idea of having a local relay and a DB: you can cut, slice, and filter as you please, no?
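The over-fetch-then-filter approach described here can be sketched like this; the function name, the zap-totals map, and the thresholds are hypothetical, with events shaped as NIP-01 kind-1 dicts:

```python
# Client-side "advanced search": apply constraints a relay can't
# express in a NIP-01 filter to events already pulled down locally.
def advanced_filter(notes, zap_totals_msat, *, min_zap_msat=0, excluded_words=()):
    """notes: list of kind-1 event dicts (NIP-01 shape).
    zap_totals_msat: {event_id: total msats zapped}, built locally."""
    out = []
    for ev in notes:
        if zap_totals_msat.get(ev["id"], 0) < min_zap_msat:
            continue                      # below the min-zap threshold
        content = ev["content"].lower()
        if any(w.lower() in content for w in excluded_words):
            continue                      # contains an excluded word
        out.append(ev)
    return out
```

This works, but it only ever sees events the client happened to fetch, which is why a relay-side index returns more complete results for the same query.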
jb55
@jb55
2mo ago
You can do that, but it wouldn’t be very efficient, you would get fewer results, and it wouldn’t scale. They will have to do it on their relay if they aren’t already. We can do it locally, but there is no corresponding remote REQ we can execute to get the data. Maybe just negentropy-syncing all the zaps for the day would work well enough for a little while.
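Without negentropy (NIP-77) support, the naive version of "sync all the zaps for the day" is just a plain NIP-01 REQ over kind-9735 zap receipts bounded by since/until, then tallying amounts per note locally. The tallying helper and the msat extractor below are hypothetical; NIP-57 receipts reference the zapped note in an "e" tag:

```python
import json, time

# One day of zap receipts (kind 9735, NIP-57).
now = int(time.time())
day_filter = {"kinds": [9735], "since": now - 86400, "until": now}
req = json.dumps(["REQ", "zap-sync", day_filter])

def tally_zaps(receipts, msat_of):
    """Sum zap amounts per zapped note id.
    receipts: kind-9735 event dicts; msat_of: callable extracting msats
    from a receipt (e.g. by decoding its bolt11 tag)."""
    totals = {}
    for ev in receipts:
        for tag in ev.get("tags", []):
            if tag and tag[0] == "e":   # id of the zapped note
                totals[tag[1]] = totals.get(tag[1], 0) + msat_of(ev)
    return totals
```

Negentropy would make re-syncing this set cheap by exchanging fingerprints instead of re-downloading every receipt, but the since/until REQ is the fallback any relay already supports.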